
Temporal waveform denoising using deep learning for injection laser systems of inertial confinement fusion high-power laser facilities

Published online by Cambridge University Press:  03 January 2025

Wei Chen
Affiliation:
Key Laboratory of High Power Laser and Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, China Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing, China
Xinghua Lu*
Affiliation:
Key Laboratory of High Power Laser and Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, China
Wei Fan*
Affiliation:
Key Laboratory of High Power Laser and Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, China Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing, China
Xiaochao Wang
Affiliation:
Key Laboratory of High Power Laser and Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, China Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing, China
*
Correspondence to: X. Lu and W. Fan, Key Laboratory of High Power Laser and Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China. Emails: [email protected] (X. Lu); [email protected] (W. Fan)

Abstract

For the pulse shaping system of the SG-II-up facility, we propose a U-shaped convolutional neural network that integrates multi-scale feature extraction capabilities, an attention mechanism and long short-term memory units, which effectively facilitates real-time denoising of diverse shaping pulses. We train the model using simulated datasets and evaluate it on both the simulated and experimental temporal waveforms. During the evaluation of simulated waveforms, we achieve high-precision denoising, resulting in great performance for temporal waveforms with frequency modulation-to-amplitude modulation conversion (FM-to-AM) exceeding 50%, exceedingly high contrast of over 300:1 and multi-step structures. The errors are less than 1% for both root mean square error and contrast, and there is a remarkable improvement in the signal-to-noise ratio by over 50%. During the evaluation of experimental waveforms, the model can obtain different denoised waveforms with contrast greater than 200:1. The stability of the model is verified using temporal waveforms with identical pulse widths and contrast, ensuring that while achieving smooth temporal profiles, the intricate details of the signals are preserved. The results demonstrate that the denoising model, trained utilizing the simulation dataset, is capable of efficiently processing complex temporal waveforms in real-time for experiments and mitigating the influence of electronic noise and FM-to-AM on the time–power curve.

Type
Research Article
Creative Commons
Creative Commons Attribution licence (CC BY)
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press in association with Chinese Laser Press

1 Introduction

Inertial confinement fusion (ICF) utilizes high-power laser drivers to generate high-energy laser pulses and aims to generate a shockwave of sufficient magnitude and maintain a sustained compression of the targeted fuel pellet, thereby initiating fusion reactions[ Reference Moses, Lindl, Spaeth, Patterson, Sawicki, Atherton, Baisden, Lagin, Larson, MacGowan, Miller, Rardin, Roberts, Van Wonterghem and Wegner1]. In high-power laser facilities, such as the National Ignition Facility (NIF)[ Reference Spaeth, Manes, Kalantar, Miller, Heebner, Bliss, Spec, Parham, Whitman, Wegner, Baisden, Menapace, Bowers, Cohen, Suratwala, Di Nicola, Newton, Adams, Trenholme, Finucane, Bonanno, Rardin, Arnold, Dixit, Erbert, Erlandson, Fair, Feigenbaum, Gourdin, Hawley, Honig, House, Jancaitis, LaFortune, Larson, Le Galloudec, Lindl, MacGowan, Marshall, McCandless, McCracken, Montesanti, Moses, Nostrand, Pryatel, Roberts, Rodriguez, Rowe, Sacks, Salmon, Shaw, Sommer, Stolz, Tietbohl, Widmayer and Zacharias2, Reference Spaeth, Manes, Bowers, Celliers, Di Nicola, Di Nicola, Dixit, Erbert, Heebner, Kalantar, Landen, MacGowan, Van Wonterghem, Wegner, Widmayer and Yang3], the Laser Megajoule (LMJ) facility in France[ Reference Denis, Beau, Deroff, Lacampagne, Chies, Julien, Bordenave, Lacombe, Vermersch and Airiau4, Reference Miquel, Lion and Vivini5] and the SG series facilities[ Reference Fan, Jiang, Wang, Wang, Huang, Lu, Wei, Li, Pan, Qiao, Wang, Cheng, Zhang, Huang, Xiao, Zhang, Li, Zhu and Lin6, Reference Li, Wang, Jin, Huang, Wang, Su and Zhao7] in China, the front-end system is required to provide high-quality laser pulses and have accurate time–power curve control. The pulse shaping system, which includes the pulse shaping unit and the feedback system, can achieve high-precision pulse shaping and closed-loop control, so that the specific pulse can provide reliable technical support for physical experiments and evaluation of the performance of the facility.

In ICF high-power laser drivers, the laser pulses of the front-end system, having undergone phase modulation, are subjected to diverse nonuniform filtering effects imparted by the subsequent optical system during transmission. Then the resulting spectral aberrations are simultaneously transformed into waveform modulation in the temporal domain, which is called frequency modulation-to-amplitude modulation conversion (FM-to-AM) effect[ Reference Rothenberg, Browning and Wilcox8, Reference Penninckx, Beck, Gleyze and Laurent9]. On the other hand, the pulse shaping system requires closed-loop feedback control using temporal waveforms of regenerative amplified output, which will be affected by electronic noise. The presence of FM-to-AM and electronic noise significantly impacts the accuracy and efficiency of pulse shaping processes, particularly in high-contrast (the ratio of the highest power to the lowest power in the pulse time–power curve) temporal waveforms and those with multiple pre-pulses, where the intensity information of the fronts or pre-pulses will be obscured. Therefore, it is necessary to use denoising algorithms to recover the smooth temporal waveforms without modulation. Currently, the shaping unit mainly utilizes the cumulative averaging algorithm (CAA) and orthogonal matching algorithm (OMP) to process pulses[ Reference Pati, Rezaiifar and Krishnaprasad10, Reference Huang, Lu, Jiang, Wang, Qiao and Fan11], but as with traditional algorithms such as mean filtering, exponential smoothing and wavelet transforms[ Reference Zhang, Zhu, Kuang and Ke12], these methods have difficulty in accurately removing the noise and FM-to-AM, and are also prone to introducing new signal distortions during computation. Qian et al. [ Reference Qian, Fan, Lu and Wang13] proposed a pulse smoothing algorithm based on the combination of wavelet threshold denoising (WTD) and first-order derivative adaptive smoothing filtering (FDASF), which effectively suppresses the effects of electronic noise and FM-to-AM on the time–power curve. However, this algorithm necessitates adjustments to its fitting parameters specifically for high-contrast temporal waveforms and those containing pre-pulses, yet the outcomes remain suboptimal and unsatisfactory. In recent years, deep learning models have been widely used in high-power laser facilities because of the powerful nonlinear mapping capability, which is able to extract features from a large amount of data to satisfy the modelling of complex physical processes[ Reference Li, Liu, Yang, Peng and Zhou14 Reference Döpp, Eberle, Howard, Irshad and Lin16]. As a data-driven model, deep learning is ideally suited for processing and analysing data features in tasks pertaining to temporal waveform shaping. Luo et al. [ Reference Luo, Tian, Li, Ni, Xie and Zhou17] proposed a convolutional neural network similar to the U-Net architecture for predicting the initial pulse waveform in a closed-loop control system of a high-power laser facility, which improved the efficiency of the control software. Liao et al. [ Reference Liao, Huang, Geng, Yuan and Hu18] proposed a neural network with series residual modules for predicting pulse waveforms in the front-end system of a high-power laser facility. Zou et al. 
[ Reference Zou, Geng, Liu, Chen, Zhou, Peng, Hu, Yuan, Liu and Liu19] proposed and demonstrated a convolutional neural network incorporating 16 additional parameters for predicting the temporal waveform output of the main amplifier in a high-power laser facility, and the prediction accuracy of this model on the experimental data surpassed that of other physical models based on Frantz–Nodvik optimization. On the other hand, deep learning models exhibit robust performance in denoising tasks for one-dimensional (1D) signals, demonstrating proficiency in enhancing signal clarity[ Reference Stoller, Durand and Ewert20, Reference Yang, Zeng, Wang, Tang and Liu21]. Based on these, the LMJ facility utilized deep learning models to remove perturbations introduced in temporal waveform measurements to improve the efficiency of obtaining satisfactory pulses[ Reference Denis, Nicolaizeau, Néauport, Lacombe and Fourtillan22]. Tian et al. [ Reference Tian, Zhang, Chu, Qin, Zhang, Geng, Huang, Wang and Wang23] removed the FM-to-AM effect from temporal waveforms in a high-power laser facility using a deep learning model, which reduced the error in waveform prediction by 20%. However, detailed reports on the smoothing of temporal waveforms characterized by high contrast and the presence of multiple pre-pulses remain scarce in the existing literature. In this paper, we study the temporal waveforms of an injection laser system using a deep learning model.

Figure 1 The temporal pulse shaping schematic in the SG-II-up high-power laser facility.

In this paper, a U-shaped convolutional neural network with multi-scale feature extraction and an attention mechanism is constructed to achieve real-time denoising of shaping pulses with multiple shapes and high contrasts for the front-end system of the SG-II-up facility. The model not only preserves intricate signal details but also recovers a smooth temporal waveform. A notable denoising effect, with a contrast exceeding 300:1 in simulated waveforms and 200:1 in experimental waveforms, is achieved. In addition, after denoising the simulated waveforms, the signal-to-noise ratio (SNR) is improved by more than 50%. Furthermore, the model possesses a robust generalization capability, enabling it to handle unknown and complex waveforms encountered during experiments. Finally, the stability of the proposed model is demonstrated using experimental waveforms.

2 Pulse shaping unit and temporal waveform characterization for the SG-II-up facility

2.1 The pulse shaping unit

The pulse shaping system of the front-end system of the SG-II-up high-power laser facility, shown in Figure 1, consists of a pulse shaping unit and a closed feedback loop for pulse shaping in the pre-amplifier system[ Reference Fan, Jiang, Wang, Wang, Huang, Lu, Wei, Li, Pan, Qiao, Wang, Cheng, Zhang, Huang, Xiao, Zhang, Li, Zhu and Lin6].

The pulse shaping unit adopts a structure combining an integrated waveguide modulator and an arbitrary waveform generator (AWG). Under the action of external trigger signals and clocks, the shaping electrical pulse and the square-wave gate pulse output from the AWG are amplified and then loaded into the two stages of the optical waveguide modulator respectively. Driven by these two electrical signals, the nanosecond square-wave laser pulse is shaped into a laser pulse output that meets specific requirements for subsequent physics experiments.

The temporal characteristics of the output optical pulse from the pulse shaping unit are intricately governed by the shaping electrical signal. A closed-loop control system is required to effectively compensate for the serious nonlinear effects caused by the electrical amplifier, enhancing both the precision and the overall efficiency of pulse shaping. After sampling the output from the regenerative amplifier, the temporal waveform is measured utilizing a photoelectric tube in conjunction with a high-speed oscilloscope. Operating under a predetermined state and with the pulse shaping system calibrated, the oscilloscope-acquired pulse waveform data are collected by the computer in real time. By solving the inverse problem of the regenerative amplifier, the computer derives both the regeneratively injected pulse waveform and the shaping electrical pulse from the AWG. Subsequently, the computer automatically initiates a feedback loop to adjust the shaping output of the AWG until the convergence criteria are met.

2.2 Output temporal waveform characteristics

To enhance the precision of the closed-loop control for pulse shaping, the positioning of the feedback system’s monitoring point is set after the regenerative amplifier, operating at a repetition frequency of 1 Hz. However, the monitoring process encounters problems posed by electronic noise and FM-to-AM arising from gain saturation and dispersion effects. The FM-to-AM effect in front-end systems due to group velocity dispersion (GVD) caused by long transmission fibres will be briefly described in the following[ Reference Qiao, Wang, Fan, Li, Jiang, Li, Huang and Lin24 Reference Zhang, Fan, Wang, Wang, Lu, Huang, Xu, Zhang, Sun, Jiao, Zhou and Jiang26].

For a single-frequency signal light, the injected pulse after two stages of phase modulation can be expressed as follows:

(1) $$\begin{align}{E}_1(t)={E}_0(t)\exp \left(i{\omega}_0t\right)=\sqrt{P_{\mathrm{in}}(t)}\exp \left[ i\varphi (t)\right]\exp \left(i{\omega}_0t\right),\end{align}$$

where $\varphi (t)={m}_1\sin \left(2\pi {f}_{\mathrm{m}1}t\right)+{m}_2\sin \left(2\pi {f}_{\mathrm{m}2}t\right)$ , ${\omega}_0$ is the injected pulse angular frequency, ${P}_{\mathrm{in}}(t)$ is the injected optical power and ${m}_i$ and ${f}_{\mathrm{m}i}$ ( $i=1,2$ ) are the modulation depths and modulation frequencies, respectively.

Neglecting the nonlinear effect of the optical fibre, the nonlinear Schrödinger equation is expressed as follows:

(2) $$\begin{align}i\frac{\partial E}{\partial z}=-\frac{i\alpha}{2}E+\frac{\beta_2}{2}\frac{\partial^2E}{\partial {T}^2},\end{align}$$

where $E$ is the slow-variable normalized amplitude of the pulse, $\alpha$ is the fibre loss, ${\beta}_2$ is the fibre dispersion coefficient and $T$ is the relative time of the pulse envelope with respect to the group velocity, satisfying $T=t-z/{v}_{\mathrm{g}}$ .

Since the loss of the optical fibre only causes amplitude attenuation and is not frequency selective in the 0.5 nm range, the loss term in Equation (2) can be ignored. Substituting Equation (1) into Equation (2), the output optical field of the fibre can be obtained as follows:

(3) $$\begin{align}{E}_{\mathrm{out}}(t)=\exp \left(i\frac{\beta_2}{2}L\cdot \frac{\mathrm{d}^2}{\mathrm{d}{t}^2}\right){E}_0(T)\exp \left(i{\omega}_0t-i{\beta}_0L\right),\end{align}$$

where ${\beta}_0$ is the mode propagation constant and $L$ is the length of the fibre. Rewriting Equation (3) and substituting Equation (1) gives the output optical power:

(4) $$\begin{align}{P}_{\mathrm{out}}(t)&={P}_{\mathrm{in}}(t)\Bigg|\left(1+i\frac{\beta_2}{2}L\cdot \frac{\mathrm{d}^2}{\mathrm{d}{t}^2}\right)\exp \left[i{m}_1\sin \left(2\pi {f}_{\mathrm{m}1}T\right)\right.\nonumber\\&\quad\left.+i{m}_2\sin \left(2\pi {f}_{\mathrm{m}2}T\right)\right]\Bigg|^2.\end{align}$$

In high-power laser drivers, the length of the transmission fibre generally does not exceed 1 km. Assuming a small value for $\left({\beta}_2L\right)/2$ and neglecting the higher-order derivative terms, the output optical power can be approximated as follows:

(5) $$\begin{align}{P}_{\mathrm{out}}(t)={P}_{\mathrm{in}}(t)\left\{1-{\beta}_2L\left[{m}_1{\omega}_1^2\sin \left({\omega}_1T\right)+{m}_2{\omega}_2^2\sin \left({\omega}_2T\right)\right]\right\}.\end{align}$$

Equation (5) reflects the FM-to-AM effect of the two modulation frequencies, where ${\omega}_i=2\pi {f}_{\mathrm{m}i}$ . Long fibre optic transmission converts the frequency modulation into amplitude modulation, which exhibits the same functional form as the frequency modulation. The modulation depth shows a linear relationship with ${\beta}_2L$ . When multiple modulation frequencies need to be analysed, the corresponding terms can be added to $\varphi (t)$ .
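To make Equation (5) concrete, the short NumPy sketch below evaluates the small-dispersion FM-to-AM approximation for a square input pulse. The modulation depths, modulation frequencies and the $\beta_2 L$ product used here are illustrative values chosen for the sketch, not parameters reported for the facility.

```python
import numpy as np

def fm_to_am(p_in, t, beta2_l, mods):
    """Small-dispersion approximation of Eq. (5):
    P_out = P_in * (1 - beta2*L * sum_i m_i * omega_i^2 * sin(omega_i * T)).

    p_in    : ideal optical power P_in(t) (arbitrary units)
    t       : time axis in seconds (stands in for T)
    beta2_l : product beta_2 * L in s^2
    mods    : iterable of (m_i, f_mi) modulation depth / frequency pairs
    """
    am = np.zeros_like(t)
    for m, f_m in mods:
        omega = 2.0 * np.pi * f_m
        am += m * omega ** 2 * np.sin(omega * t)
    return p_in * (1.0 - beta2_l * am)

# Illustrative numbers (not taken from the paper): a 3 ns square pulse,
# two phase-modulation tones and beta_2*L for roughly 1 km of fibre.
t = np.arange(0.0, 6e-9, 2.3e-12)                       # 6 ns window, 2.3 ps step
p_in = np.where((t > 1.5e-9) & (t < 4.5e-9), 1.0, 0.0)
p_out = fm_to_am(p_in, t, beta2_l=2.3e-23,
                 mods=[(6.0, 3.3e9), (0.3, 22.5e9)])
```

With these example numbers the two tones produce roughly 6% and 14% amplitude modulation, and the modulation depth scales linearly with $\beta_2 L$ , as Equation (5) indicates.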

Figure 2(a) shows the electrical waveform set by the AWG and Figure 2(b) shows the temporal waveform collected at the front-end system. As shown in Figure 2(b), electronic noise and FM-to-AM conspire to degrade the smoothness of the temporal waveform, particularly pronounced in high-contrast scenarios, potentially leading to erroneous closed-loop control decisions and consequently compromising overall accuracy. To mitigate these issues and achieve a smoother temporal waveform, it is necessary to implement an algorithm to remove FM-to-AM and noise in the measured data.

Figure 2 The electrical waveform and temporal waveform in the pulse shaping unit: (a) the electrical waveform set by the AWG; (b) the temporal waveform collected at the front-end system.

3 Methods

According to signal sparsity theory, assume that the original temporal waveform obtained from the acquisition is $y\in \mathbf{R}^n$ , the ideal waveform is $x$ and the perturbation signal generated by the FM-to-AM and the noise is $v$ . The original temporal waveform can then be regarded as a linear superposition of the ideal waveform and the perturbation signal:

(6) $$\begin{align}y=x+v.\end{align}$$

Assuming that there exists the ideal transformation $\Phi$ that sparsely encodes $y$ , we have the following:

(7) $$\begin{align}C=\Phi y,\end{align}$$

where $C\in \mathbf{R}^m$ is the coefficient vector of $y$ in the sparse domain, usually with $m<n$ . In the $\Phi$ -domain, the perturbed signal is represented by small-valued coefficients, while the effective signal is expressed by large-valued coefficients. The smooth temporal waveform and the perturbed signal can therefore be distinguished by the magnitude of the coefficients[ Reference Zhang, Xu, Yang, Li and Zhang27]. Assuming that there exists an operator $T$ that suppresses the small-valued coefficients, we have the following:

(8) $$\begin{align}T(C)=T\left(\Phi y\right)=T\left(\Phi x+\Phi v\right)=\Phi \overset{\frown }{x}.\end{align}$$

The following optimization function is used to solve the sparse transform $\Phi$ and $T$ :

(9) $$\begin{align}\arg \min {\left\Vert x-{\Phi}^HT\Phi (y)\right\Vert}^2,\end{align}$$

where ${\Phi}^HT\Phi (y)$ is the denoised signal $\overset{\frown }{x}$ . The powerful nonlinear mapping capability of the neural network can model ${\Phi}^HT\Phi$ as $H\left(\theta \right)$ , in which $\theta$ are the parameters of the network. In the model, the gradient descent algorithm is used to continuously update the parameters $\theta$ , and the process of data-driven learning enables the model to map the original temporal waveforms to the optimal feature space, so that the model converges to or approximates the optimal solution. In this paper, a 1D convolutional network model is used for the denoising task of temporal waveforms.

3.1 Convolutional neural network model structure

ICF physics experiments require temporal waveforms with a certain contrast or pre-pulses, which are highly susceptible to being masked by heavy modulation and noise. Therefore, the denoising of the temporal waveforms must be achieved while preserving the details of the signal. The structure of the denoising model we propose is shown in Figure 3; the main framework is based on the U-Net architecture from the field of image segmentation[ Reference Ronneberger, Fischer and Brox28], in which each module adopts a 1D structure.

Figure 3 Convolutional neural network model structure.

As shown in Figure 3(a), the model is divided into 11 modules, with stages 1–5 as encoders, stages 6–9 as decoders, stage 10 as a long short-term memory (LSTM) network[ Reference Simone and Saleh29] and stage 11 as an output convolutional layer. To enhance the robustness and denoising capability of the model, a multi-scale feature extraction module and a channel attention module are added to the U-shaped network.

Figure 3(b) gives the structure of the multi-scale feature extraction module, which consists of residual learning and the inception[ Reference Szegedy, Vanhoucke, Ioffe, Shlens and Wojna30] architecture. A signal of size $H\times 1$ is input into a convolutional layer, which employs a convolutional kernel of size 3, utilizes a padding of 1 and has a stride of 1, alongside $C$ channels. This results in a feature signal of dimensions $H\times C$ being generated as output. Subsequently, the feature signal passes through four parallel branch modules. Each branch initially reduces the number of channels of the feature signal to $C/4$ using a $1\times 1$ convolution. Then, three of these branches proceed through convolutional layers of three different scales, with kernel sizes of 3, 5 and 7, respectively. The padding numbers for these convolutional layers are 1, 2 and 3, respectively, and the number of channels remains $C/4$ for each of these layers. After each of the four branches passes through a channel attention module, the branch outputs are concatenated along the channel dimension, resulting in an output feature signal with dimensions of $H\times C$ . This process expands the feature mapping space and achieves multi-scale feature fusion. Finally, this feature signal is added to the input signal, yielding the output of the multi-scale feature module. The vertically stacked residual blocks increase the depth of the network, while the horizontally extended inception structure enhances the width of the network by leveraging convolutional kernels of different scales. This combination more effectively learns detailed features from the temporal waveform.
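A PyTorch sketch of one plausible reading of this module is given below. The channel count is assumed to stay at $C$ through the block, and `make_attention` stands in for the channel attention module sketched after the next paragraph (it defaults to a no-op so that this snippet runs on its own); these choices are assumptions, not details confirmed by the text or the figure.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Sketch of the multi-scale feature extraction module of Figure 3(b):
    an entry convolution, four inception-style parallel branches (a 1x1
    branch plus 1x1 branches followed by kernels 3/5/7), channel attention
    on each branch, channel-wise concatenation and a residual connection
    back to the block input. Assumes the channel count C is unchanged."""

    def __init__(self, channels: int, make_attention=nn.Identity):
        super().__init__()
        c4 = channels // 4
        self.entry = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

        def branch(kernel=None, padding=0):
            layers = [nn.Conv1d(channels, c4, kernel_size=1), nn.ReLU(inplace=True)]
            if kernel is not None:
                layers += [nn.Conv1d(c4, c4, kernel_size=kernel, padding=padding),
                           nn.ReLU(inplace=True)]
            layers.append(make_attention(c4))   # channel attention per branch
            return nn.Sequential(*layers)

        self.branches = nn.ModuleList(
            [branch(), branch(3, 1), branch(5, 2), branch(7, 3)])

    def forward(self, x):                                           # x: (B, C, H)
        feat = self.entry(x)
        fused = torch.cat([b(feat) for b in self.branches], dim=1)  # (B, C, H)
        return fused + x                                            # residual connection
```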

Figure 4 Typical simulated waveform data.

Figure 3(c) illustrates the structure of a 1D channel attention module[ Reference Hu, Shen, Albanie, Sun and Wu31], which takes a feature signal of length $L$ and $C$ channels as input. By applying a global average pooling operation, the signal’s spatial dimension is compressed to $1\times C$ , enabling information fusion. Subsequently, the compressed signal passes through two fully connected layers: a dimension-reducing layer that outputs a $1\times C/t$ vector, where $t$ is the channel reduction ratio, followed by a rectified linear unit (ReLU) activation function. This is then followed by a dimension-increasing layer that expands the vector back to $1\times C$ . A sigmoid activation function is applied to obtain a set of weights within the range (0, 1) of size $1\times C$ . Finally, each channel of the input feature signal is multiplied by its corresponding weight to yield a new feature signal. The squeeze-and-excitation (SE)[ Reference Hu, Shen, Albanie, Sun and Wu31] module relies on the interdependencies among channels to recalibrate the feature responses of the signal.
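The following is a minimal PyTorch sketch of such a 1D SE module; the reduction ratio $t$ is not given in the text, so the default of 4 is an assumption.

```python
import torch
import torch.nn as nn

class ChannelAttention1D(nn.Module):
    """Sketch of the SE-style channel attention module of Figure 3(c)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)               # squeeze: (B, C, L) -> (B, C, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),   # 1 x C  -> 1 x C/t
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),   # 1 x C/t -> 1 x C
            nn.Sigmoid(),                                 # weights in (0, 1)
        )

    def forward(self, x):                                 # x: (B, C, L)
        b, c, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1)
        return x * w                                      # excitation: recalibrate each channel
```

With this class available, the multi-scale block sketched above could be instantiated as, for example, `MultiScaleBlock(64, make_attention=ChannelAttention1D)`.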

Each stage in the encoder comprises two convolutional layers. The first convolutional layer uses a kernel size of 3 with a padding and stride of 1, and the number of kernels increases progressively from 64, 128, 256 and 512 to 1024. The second convolutional layer is a multi-scale feature extraction module. The encoder’s primary function is feature extraction. Between these five modules, pooling layers are utilized to progressively downsample the signal, resulting in a halving of the length of the 1D signal and a doubling of the number of channels after each downsampling operation. When the input signal has dimensions of $H\times 1$ , the encoder outputs a feature signal with dimensions of $H/16\times 1024$ . Each module in the decoder contains three convolutional layers. The first convolutional layer comprises a transpose convolutional layer and a channel attention module. The second and third convolutional layers use a kernel size of 3 with a padding and stride of 1, and the number of kernels decreases progressively from 512, 256 and 128 to 64. The decoder’s role is feature fusion, which is utilized for mapping sparse representations to high-dimensional nonlinear outputs. Starting from the output of the encoder, the modules utilize 1D transposed convolutional layers with a kernel size and stride of 2, and a padding of 1, to progressively upsample the signal. After each upsampling operation, the length of the 1D signal doubles, while the number of channels is halved. Following this upsampling, a channel attention module is applied. The number of kernels in the transposed convolutional layers of each module is 512, 256, 128 and 64, respectively. Corresponding encoder and decoder layers are concatenated along the channel dimension, and the output signal from the decoder has dimensions of $H\times 64$ . The activation function used in the convolutional layers is the ReLU function.

The output signal from the U-shaped network passes through an LSTM layer with 32 hidden units, followed by a convolutional layer with the kernel size, number of kernels, padding and stride all set to 1. The output is a denoised signal with the same size as the input signal ( $H\times 1$ ).
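Putting the pieces together, the sketch below assembles the encoder, decoder, LSTM and output layers described above, reusing the `MultiScaleBlock` and `ChannelAttention1D` sketches. Two assumptions are made for the sake of a runnable example: the transposed convolutions use kernel size 2, stride 2 and no padding so that lengths double exactly (the text quotes a padding of 1), and the input length is taken to be divisible by 16 (otherwise it would need padding or cropping).

```python
import torch
import torch.nn as nn

class DenoisingUNet1D(nn.Module):
    """Sketch of the overall network of Figure 3(a): five encoder stages with
    multi-scale blocks, four decoder stages with channel attention and skip
    connections, an LSTM with 32 hidden units and a 1x1 output convolution."""

    def __init__(self, widths=(64, 128, 256, 512, 1024)):
        super().__init__()
        self.enc, in_ch = nn.ModuleList(), 1
        for w in widths:                                       # encoder stages 1-5
            self.enc.append(nn.Sequential(
                nn.Conv1d(in_ch, w, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                MultiScaleBlock(w, make_attention=ChannelAttention1D)))
            in_ch = w
        self.pool = nn.MaxPool1d(2)

        self.up, self.dec = nn.ModuleList(), nn.ModuleList()
        rev = list(widths)[::-1]                               # (1024, 512, 256, 128, 64)
        for w_in, w_out in zip(rev[:-1], rev[1:]):             # decoder stages 6-9
            self.up.append(nn.Sequential(
                nn.ConvTranspose1d(w_in, w_out, kernel_size=2, stride=2),
                ChannelAttention1D(w_out)))
            self.dec.append(nn.Sequential(
                nn.Conv1d(2 * w_out, w_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.Conv1d(w_out, w_out, kernel_size=3, padding=1), nn.ReLU(inplace=True)))

        self.lstm = nn.LSTM(widths[0], 32, batch_first=True)   # stage 10: 32 hidden units
        self.head = nn.Conv1d(32, 1, kernel_size=1)            # stage 11: output layer

    def forward(self, x):                                      # x: (B, 1, H)
        skips = []
        for i, stage in enumerate(self.enc):
            x = stage(x)
            if i < len(self.enc) - 1:                          # keep skip, then downsample
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))           # skip connection
        y, _ = self.lstm(x.transpose(1, 2))                    # (B, H, 64) -> (B, H, 32)
        return self.head(y.transpose(1, 2))                    # (B, 1, H)
```

For instance, `DenoisingUNet1D()(torch.randn(1, 1, 2560))` returns a tensor of the same shape as its input.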

3.2 Dataset

During model training, it is imperative to have data labels that represent the ideal temporal waveform, devoid of FM-to-AM and noise signals. However, in current experiments, acquiring such an ideal temporal waveform proves elusive. Consequently, we resort to numerical simulation as a means to construct the simulation dataset. The temporal waveforms with FM-to-AM are generated using Equation (5), and random and sinusoidal noises are added to it:

(10) $$\begin{align}{P}_{\mathrm{out}}(t)&={P}_{\mathrm{in}}(t)\kern-1pt\left\{\kern-1pt 1-{\beta}_2L\left[{m}_1{\omega}_1^2\sin \left({\omega}_1T\right)+{m}_2{\omega}_2^2\sin \left({\omega}_2T\right)\right]\kern-1pt\right\}\nonumber\\&\quad+{P}_{\mathrm{noise}}(t),\end{align}$$

where ${P}_{\mathrm{out}}(t)$ is the waveform with the perturbed signal, ${P}_{\mathrm{in}}(t)$ is the ideal waveform and ${P}_{\mathrm{noise}}(t)$ is the added noise.

To bolster the generalization performance of the model, the simulation dataset encompasses a diverse array of temporal waveforms, including square waves, exponentially shaped pulses with varying contrasts and shaping pulses preceded by pre-pulses. These waveforms span a 6 ns window with a sampling interval of 2.3 ps. By utilizing random number generation, we introduce variations in the pulse width, contrast, FM-to-AM and noise magnitude of the temporal waveforms within the dataset, ensuring a diverse range of conditions for training. The dataset consists of a total of 13,000 pairs of ideal waveforms and waveforms with perturbed signals, and is divided into training and validation sets in the ratio of 8:2. Figure 4 shows some of the temporal waveforms in the dataset.
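The snippet below sketches how such training pairs can be generated from Equation (10): a randomly parameterized ideal pulse serves as the label, and a perturbed copy with FM-to-AM, random noise and sinusoidal noise serves as the input. The pulse shape family, parameter ranges, modulation frequencies and noise amplitudes are illustrative choices, not the values used to build the paper's 13,000-pair dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 6e-9, 2.3e-12)            # 6 ns window, 2.3 ps sampling interval

def random_ideal_pulse(t):
    """One ideal (label) waveform; the shape family and ranges are illustrative
    stand-ins for the paper's square, exponentially shaped and pre-pulse shapes."""
    width = rng.uniform(1e-9, 4e-9)
    start = rng.uniform(0.5e-9, 5.5e-9 - width)
    window = (t >= start) & (t < start + width)
    contrast = rng.uniform(5.0, 330.0)
    p = np.zeros_like(t)
    tau = width / np.log(contrast)            # exponential ramp: 1/contrast -> 1
    p[window] = np.exp((t[window] - (start + width)) / tau)
    if rng.random() < 0.3:                    # occasionally add a weak pre-pulse
        p += np.where((t >= start - 0.6e-9) & (t < start - 0.3e-9),
                      rng.uniform(0.02, 0.2), 0.0)
    return p / p.max()

def perturb(p_in, t):
    """Eq. (10): apply FM-to-AM and add random plus sinusoidal noise."""
    beta2_l = rng.uniform(1e-23, 4e-23)
    mods = [(rng.uniform(1.0, 8.0), 3.3e9), (rng.uniform(0.1, 0.5), 22.5e9)]
    am = sum(m * (2 * np.pi * f) ** 2 * np.sin(2 * np.pi * f * t) for m, f in mods)
    p = p_in * (1.0 - beta2_l * am)
    p += rng.normal(0.0, rng.uniform(0.002, 0.01), size=t.size)                     # random noise
    p += rng.uniform(0.002, 0.01) * np.sin(2 * np.pi * rng.uniform(1e8, 1e9) * t)   # sinusoidal noise
    return p

dataset = []
for _ in range(4):                            # the paper uses 13,000 pairs
    ideal = random_ideal_pulse(t)
    dataset.append((ideal, perturb(ideal, t)))   # (label, noisy input)
```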

3.3 Model training

The training of the model is grounded in the minimization of a loss function, which assesses the discrepancy between the ideal waveform, denoted as $x$ , and the denoised waveform, denoted as $\overset{\frown }{x}$ . Specifically, the reconstruction loss is quantified as the root mean square error (RMSE) between these two waveforms, providing a metric for optimizing the model’s performance in restoring the original signal characteristics:

(11) $$\begin{align}\mathrm{Loss}1=\sqrt{\frac{1}{N}\sum \limits_{i=1}^N{\left({x}_i-{\overset{\frown }{x}}_i\right)}^2},\end{align}$$

where $i$ is the $i$ th sampling point of the waveforms.

Considering the need for the temporal waveform to exhibit smooth characteristics and retain detailed signals, we define the smoothness loss as follows:

(12) $$\begin{align}\mathrm{Loss}2=\sqrt{\frac{1}{N-1}\sum \limits_{i=2}^{N-1}{\left(\left|{x}_{i+1}-{x}_i\right|-\left|{\overset{\frown }{x}}_{i+1}-{\overset{\frown }{x}}_i\right|\right)}^2}.\end{align}$$

The objective loss function for the whole model is as follows:

(13) $$\begin{align}\mathrm{Loss}=\alpha \cdot \mathrm{Loss}1+\beta \cdot \mathrm{Loss}2,\end{align}$$

where $\alpha$ and $\beta$ are the equilibrium coefficients between the two loss functions. Here, $\alpha$ and $\beta$ are taken as 0.3 and 0.7, respectively, in the experiment.
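A direct PyTorch transcription of Equations (11)–(13) is given below; reducing over the whole batch inside a single square root is an assumption of this sketch.

```python
import torch

def combined_loss(x, x_hat, alpha=0.3, beta=0.7):
    """Eqs. (11)-(13): RMSE reconstruction loss plus a smoothness loss on the
    absolute adjacent-sample differences, weighted by alpha and beta.

    x, x_hat : ideal and denoised waveforms of shape (batch, 1, N)."""
    loss1 = torch.sqrt(torch.mean((x - x_hat) ** 2))               # Eq. (11)
    dx = torch.abs(x[..., 1:] - x[..., :-1])                       # |x_{i+1} - x_i|
    dx_hat = torch.abs(x_hat[..., 1:] - x_hat[..., :-1])
    loss2 = torch.sqrt(torch.mean((dx - dx_hat) ** 2))             # Eq. (12)
    return alpha * loss1 + beta * loss2                            # Eq. (13)
```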

The model employs the Adam optimizer to iteratively refine and optimize its weights, starting with a learning rate of 0.001. To ensure efficient convergence, an exponential decay strategy is implemented, in which the learning rate for each subsequent iteration is reduced to 97.5% of the previous value (a decay coefficient of 0.975). With a batch size of 160 samples, the model undergoes 300 epochs of training. Figure 5 depicts the progression of the loss function for both the training and validation sets, clearly indicating that the model has converged.
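This training configuration translates into the following sketch, which relies on the `combined_loss` function above and on a model instance such as the network sketched in Section 3.1; applying the learning-rate decay once per epoch and the data-handling details are assumptions.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, noisy, ideal, epochs=300, batch_size=160, device="cpu"):
    """Training setup of Section 3.3: Adam, initial learning rate 0.001,
    exponential decay with coefficient 0.975, batch size 160, 300 epochs.
    `noisy` and `ideal` are arrays of shape (num_pairs, N)."""
    ds = TensorDataset(torch.as_tensor(noisy, dtype=torch.float32).unsqueeze(1),
                       torch.as_tensor(ideal, dtype=torch.float32).unsqueeze(1))
    loader = DataLoader(ds, batch_size=batch_size, shuffle=True)
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.975)
    for _ in range(epochs):
        for x_noisy, x_ideal in loader:
            x_noisy, x_ideal = x_noisy.to(device), x_ideal.to(device)
            opt.zero_grad()
            loss = combined_loss(x_ideal, model(x_noisy))   # loss sketched above
            loss.backward()
            opt.step()
        sched.step()                                        # 97.5% of the previous value
    return model
```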

Figure 5 The progression of the loss function for both the training and validation sets.

4 Results and discussion

In this section, we apply the trained model to the denoising of temporal waveforms obtained from simulations and experimental measurements. In addition, the RMSE, pulse contrast, rising edge time and SNR are used to analyse and discuss the results.

4.1 Results of simulated waveforms

To cater to the actual operational requirements of the facility, a diverse array of shaping pulses incorporating FM-to-AM signals and noise is simulated. Notably, none of the simulated waveforms utilized for the subsequent analysis are present within the given dataset.

4.1.1 Simulated waveforms with different contrasts

In the ICF high-power laser driver, due to the gain saturation effect of the system and the target compression process of physical experiments, the front-end system is required to output nanosecond laser pulses with contrasts of up to a few hundred to one. We simulate temporal waveforms incorporating FM-to-AM and noise, with contrast spanning from 5:1 to 330:1. Subsequently, we apply the trained neural network model to denoise the simulated data.

Figures 6(a)–6(c) and 6(g)–6(i) show the simulated waveforms incorporating FM-to-AM and electronic noise, which are the inputs of the model. Figures 6(d)–6(f) and 6(j)–6(l) show the model outputs and their corresponding pure waveforms. As demonstrated in the figure, the output temporal waveforms, following model processing, exhibit remarkable smoothness and are virtually free of modulation and noise. In order to clearly present the detailed recovery of the signal, we also show partially enlarged views of the temporal waveforms in Figure 6. When the contrast of the temporal waveform is significant, the intensity of the leading edge of the temporal waveform tends to be obscured by noise and modulation signals. Our method recovers the intensity of the leading edge of the temporal waveform well. When the contrast is greater than 300:1 (Figure 6(i)), the model still produces a smooth output temporal waveform (Figure 6(l)), ensuring that the intensity of the leading edge is effectively restored and preserved, thus optimizing overall waveform quality.

Figure 6 The simulated waveforms and corresponding denoising results obtained by the model: (a)–(c), (g)–(i) input waveforms; (d)–(f), (j)–(l) denoised waveforms and ideal waveforms.

To further analyse the performance of the model, we calculate the RMSE and SNR between the denoised and ideal waveforms, and compare the rising edge time and contrast between the ideal and denoised waveforms. The expression for the SNR is defined as follows:

(14) $$\begin{align}\mathrm{SNR}=10\log \frac{\sum \limits_{i=1}^N{y_i}^2}{\sum \limits_{i=1}^N{\left({\overset{\frown }{x}}_i-{y}_i\right)}^2},\end{align}$$

where ${y}_i$ is the ideal signal; before denoising, ${\overset{\frown }{x}}_i$ denotes the signal with noise, whereas after denoising, ${\overset{\frown }{x}}_i$ denotes the denoised signal.

The extent of recovery of the overall signal characteristics can be quantified using the RMSE and SNR, while the temporal waveform contrast and rising edge time serve as metrics to characterize the detail recovery within the signal. The results of the calculations are shown in Table 1. The RMSE values between the denoised waveforms and the ideal waveforms are all less than 0.50%, and the SNRs are all improved by more than 50%. The model removes modulation and noise while the overall trend of the temporal waveform is preserved. The relative error of the waveform contrast recovery is kept within 1% and the error in the rising edge time is within 100 ps, which indicates that the details of the temporal waveform are also well recovered.
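For reference, the four metrics can be computed as in the sketch below; how the lowest-power region for the contrast and the thresholds for the rise time are chosen is not specified in the text, so they appear here as explicit arguments or assumed defaults.

```python
import numpy as np

def rmse(x_ideal, x_hat):
    """Root mean square error between the ideal and denoised waveforms."""
    return np.sqrt(np.mean((np.asarray(x_ideal) - np.asarray(x_hat)) ** 2))

def snr_db(x_ideal, x_test):
    """Eq. (14): SNR of a noisy or denoised waveform against the ideal one."""
    x_ideal, x_test = np.asarray(x_ideal), np.asarray(x_test)
    return 10.0 * np.log10(np.sum(x_ideal ** 2) / np.sum((x_test - x_ideal) ** 2))

def contrast(p, window):
    """Ratio of the highest to the lowest power over the shaped part of the
    pulse; `window` is a boolean mask selecting that part."""
    p = np.asarray(p)[np.asarray(window)]
    return p.max() / p.min()

def rising_edge_time(p, t, lo=0.1, hi=0.9):
    """Rise time of the main edge; the 10%-90% thresholds are an assumption."""
    p, t = np.asarray(p), np.asarray(t)
    peak = p.max()
    return t[np.argmax(p >= hi * peak)] - t[np.argmax(p >= lo * peak)]
```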

Table 1 Evaluation of model performance indicators.

Figure 7 The simulated waveforms and corresponding denoising results obtained by the model: (a) input waveform; (b) output waveform of one model run and the ideal waveform; (c) output waveform of three model runs and the ideal waveform.

Table 2 Comparison of model performance with different numbers of calculations.

4.1.2 Simulated waveforms with significant noise

To verify the model’s capability of removing significant FM-to-AM and noise from temporal waveforms, we simulate a temporal waveform with a contrast of 18.5:1 and FM-to-AM of 45%, and further introduce substantial random noise and sinusoidal noise into it, as shown in Figure 7(a).

Figure 7(b) shows the model’s output, where it effectively eliminates the majority of the modulation and noise signals. However, a discernible modulation remains evident, indicating that there is still room for enhancement to further smoothen the waveform and achieve optimal results. We apply the model to compute the temporal waveform of Figure 7(a) in an iterative manner, three times in total. Specifically, the output from each preceding iteration is utilized as the input for the subsequent model run. The cumulative results, as presented in Figure 7(c), demonstrate a notable improvement in the smoothing of the temporal waveform, indicating the effectiveness of this iterative approach. Table 2 summarizes the comparative results between a single calculation and three iterations of calculations using the model. When the model is applied three times iteratively, the RMSE significantly drops from 0.50% to 0.27%, and the contrast and rising edge time are closer to the ideal situation. Furthermore, the SNR experiences a near 15% improvement. On a PC with 11th Gen Intel (R) Core (TM) i7-11700 (2.50 GHz), the times for a single calculation and three iterations of calculations are 1.96 and 3.95 s, respectively.

Figure 8 shows the performance metrics of the denoised waveforms when the model is applied for one to five iterations. Figure 8(a) shows the RMSE and SNR of the model outputs for these scenarios, Figure 8(b) plots the contrast of the output waveforms and Figure 8(c) shows the time required for one to five model calculations. The results indicate a significant enhancement in all the metrics of the model outputs when calculated twice, with the optimal outcome achieved after three iterations. In terms of time efficiency, the incremental time cost associated with multiple model computations remains minimal. Furthermore, the computation time can be reduced further when the model is deployed in a graphics processing unit (GPU) environment.

Figure 8 Model performance with different numbers of calculations: (a) RMSE and SNR; (b) errors in the contrast of the waveforms; (c) time of the calculations.

Therefore, when dealing with temporal waveforms that exhibit significant FM-to-AM and noise, without any additional parameter modifications, we can optimize the smoothing effect by selecting an appropriate number of model calculations based on the specific requirements. This ensures that the best possible waveform smoothing is achieved.
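In code, this iterative application is a simple feedback loop around the trained model, as sketched below.

```python
import torch

def iterative_denoise(model, waveform, n_passes=3):
    """Apply the trained model repeatedly, feeding each output back as the
    next input (the iterative scheme of Section 4.1.2). `waveform` is a 1D
    array already normalized to its maximum."""
    x = torch.as_tensor(waveform, dtype=torch.float32).view(1, 1, -1)
    model.eval()
    with torch.no_grad():
        for _ in range(n_passes):
            x = model(x)               # previous output becomes the next input
    return x.view(-1).numpy()
```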

4.1.3 Simulated complex waveforms

ICF physics experiments frequently necessitate the utilization of specific composite pulses. To evaluate the model’s generalization capabilities, we devise intricate pulses incorporating multiple step signals, while also integrating FM-to-AM and noise, as shown in Figures 9(a)–9(c).

Figure 9 Comparison of complex input waveforms and denoised waveforms obtained by the model: (a)–(c) input waveforms; (d)–(f) denoised waveforms and ideal waveforms.

Figures 9(d)–9(f) show the output results of these three pulses processed by the model, in which both the modulation signal and the noise have been removed while the detailed parts of the waveforms are well preserved. Table 3 illustrates the substantial denoising capability achieved for the three waveforms, demonstrating an RMSE of less than 0.5%, a remarkable proximity to the ideal scenario in terms of contrast, and a notable enhancement in the SNR by 97%, 78% and 98%, respectively. The slightly higher RMSE observed in Figures 9(b) and 9(c) primarily stems from the presence of multiple rising edges within the pulse. The denoising algorithm, when applied, tends to produce smoother transitions at these edges, whereas the numerically simulated ideal pulse exhibits abrupt, steeper changes at these points, resulting in a disparity between the two.

Table 3 Evaluation of model performance indicators (complex waveforms).

Figure 10 The temporal waveforms of the experiment and corresponding denoising results obtained by the model: (a)–(c) input waveforms; (d)–(f) denoised waveforms.

Based on the above results and analysis, the model demonstrates robust generalizability, effectively achieving real-time denoising of temporal waveforms varying in contrast, shape and noise levels, all while preserving the essential details of the waveforms. In the subsequent section, we will analyse the model’s denoising proficiency when applying it to temporal waveforms derived from experimental acquisitions.

4.2 Results of experimental waveforms

In the experiment, the temporal waveforms are measured by a high-speed photodetector and a 30 GHz oscilloscope with a sampling rate of 80 GSa/s in the front-end system of the SG-II-up facility, according to the structure shown in Figure 1. All experimental waveforms are normalized to the maximum value before being input to the model.

4.2.1 Experimental waveforms with different contrasts

Figures 10(a)–10(c) show the collected temporal waveforms, each displaying a different contrast. Figures 10(d)–10(f) present the outputs generated by the model, where the acquired temporal waveforms have undergone trend smoothing, effectively eliminating modulation and noise. Notably, the contrasts of these processed temporal waveforms are significantly enhanced, reaching 10:1, 70:1 and 250:1, respectively. As shown in Figures 10(c) and 10(f), the proposed model is able to recover the rising edge of the high-contrast temporal waveform that was originally drowned in noise, which makes it possible to improve the accuracy and efficiency of the pulse shaping closed-loop system.

4.2.2 Experimental waveforms with different shapes

To further validate the applicability of the proposed model, we analyse three additional temporal waveforms with diverse shapes, as shown in Figures 11(a)–11(c). The results computed by our model demonstrate its efficacy in effectively eliminating modulation and noise signals from these waveforms. Notably, the temporal waveform in Figure 11(d) not only successfully retains the pre-pulse signal but also remarkably restores the intricate details of the pre-pulse’s leading-edge intensity, showing the model’s precision and versatility. The distortion evident in the leading segments of the temporal waveforms presented in Figures 11(e) and 11(f) is caused by the nonlinear behaviour of the electrical amplifier. Because the model is trained to account for FM-to-AM effects, in which the noise oscillates around the smoothed signal, the denoised output maintains the underlying waveform trend obtained from the measurements and reconstructs both the smooth and intricate contours of the temporal waveform. To restore the partially distorted signals within the temporal waveform, employing a higher-quality electrical amplifier or utilizing transfer function calculations for signal inversion would be necessary.

Figure 11 Different types of temporal waveforms of the experiment and corresponding denoising results obtained by the model: (a)–(c) input waveforms; (d)–(f) denoised waveforms.

4.2.3 Analysis of the model stability

In order to analyse the stability of the model, we use the model to process the temporal waveforms under the same experimental conditions.

Figure 12 shows five electrical waveforms with the same contrast from the AWG output. Figure 13(a) shows the regenerative amplified output temporal waveforms, which are used as inputs to the model, and the denoised waveforms obtained are shown in Figure 13(b). From the results, it can be seen that the modulation signals of these five temporal waveforms are effectively removed, and the intensities of the leading and trailing edges of the waveforms remain highly consistent. After calculation, we derive the contrast for these five temporal waveforms, with the outcomes presented in Figure 13(c). Notably, all computed results lie between 20:1 and 21:1, demonstrating the stability exhibited by the model in processing the experimental outcomes. Owing to factors such as the nonlinear effects of the electrical amplifiers and measurement uncertainties, the temporal waveforms obtained after regenerative amplification exhibit varying degrees of deviation in the leading-edge intensity and rise time, even when the AWG outputs waveforms of the same contrast.

Figure 12 Electric waveforms with the same contrast set by the AWG.

Figure 13 The temporal waveforms of the experiment and corresponding denoised results obtained by the model: (a) input waveforms; (b) denoised waveforms; (c) contrast of five denoised temporal waveforms.

The denoising results for the simulated and experimental waveforms show the robust generalization capability of the model trained with simulated datasets. The trained model can handle temporal waveforms with multiple modulation frequencies and high contrast that are more complex than the simulated data. When the FM-to-AM and noise of the temporal waveforms are significant, it can also achieve the expected results through multiple model calculations. Remarkably, this is achieved without readjusting parameters for specific waveform characteristics, as is often the case with traditional methods[ Reference Pati, Rezaiifar and Krishnaprasad10 Reference Qian, Fan, Lu and Wang13]. The research results are expected to meet the needs of different physics experiments for a variety of specific temporal waveforms.

5 Conclusions

In summary, a U-shaped convolutional neural network that incorporates multi-scale feature extraction, an attention mechanism and LSTM units is constructed to achieve real-time denoising of shaping pulses with multiple shapes and high contrasts for the front-end system in the SG-II-up facility. Based on the temporal pulse shaping schematic of the SG-II series high-power laser facilities, we analyse the FM-to-AM characteristics of the temporal waveforms and produce a simulation dataset for model training and testing. Taking shaped pulses with FM-to-AM and noise as input, the proposed model outputs smooth waveforms free of FM-to-AM and noise. By training on a simulated dataset, the model achieves real-time denoising of various types of shaped pulses. For temporal waveforms that contain significant noise and high FM-to-AM (>50%), high contrast ratios (>300:1) and pre-pulses, along with multi-step or multi-level transitions, the proposed model achieves remarkable performance. For significant FM-to-AM and noise, the number of model calculations can be increased appropriately to improve the recovery of denoised waveforms without any additional parameter modifications. Specifically, the relative errors in both the RMSE and contrast are maintained below 1%, while the SNR is improved by over 50%. This model not only preserves the critical features of the waveforms but also achieves high-precision denoising and demodulation. In the experiments, denoised waveforms with contrast exceeding 200:1 and various shapes are successfully obtained. The stability of the model is validated using temporal waveforms with identical pulse widths and contrast, demonstrating that it can produce smooth temporal waveforms while preserving the fine details of the signals. The research findings indicate that the denoising model trained on simulated datasets exhibits excellent generalization capabilities, enabling its application to denoise experimental waveforms. Even for unknown complex waveforms, the model achieves satisfactory processing results. This approach has the potential to significantly enhance the accuracy and efficiency of closed-loop control systems by effectively suppressing the influence of electronic noise and FM-to-AM on time–power curves. By mitigating these disturbances, the model contributes to improving the overall performance and reliability of the control systems in various applications.

The research further validates the intelligent application of deep learning models in high-power laser facilities. The current model is used to directly recognize and restore temporal waveforms in a single step. Future research will focus on correction of signal distortion due to nonlinear effects in electrical amplifiers and analysis of the modulation signal based on deep learning. Such methods would significantly broaden deep learning models’ applicability and effectiveness in high-power laser facilities, which will provide important support for their sophisticated regulation and analysis.

Acknowledgements

This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA25020303) and operation of the SG-II facility.

References

1. Moses, E. I., Lindl, J. D., Spaeth, M. L., Patterson, R. W., Sawicki, R. H., Atherton, L. J., Baisden, P. A., Lagin, L. J., Larson, D. W., MacGowan, B. J., Miller, G. H., Rardin, D. C., Roberts, V. S., Van Wonterghem, B. M., and Wegner, P. J., Fusion Eng. Des. 69, 1 (2016).
2. Spaeth, M. L., Manes, K. R., Kalantar, D. H., Miller, P. E., Heebner, J. E., Bliss, E. S., Spec, D. R., Parham, T. G., Whitman, P. K., Wegner, P. J., Baisden, P. A., Menapace, J. A., Bowers, M. W., Cohen, S. J., Suratwala, T. I., Di Nicola, J. M., Newton, M. A., Adams, J. J., Trenholme, J. B., Finucane, R. G., Bonanno, R. E., Rardin, D. C., Arnold, P. A., Dixit, S. N., Erbert, G. V., Erlandson, A. C., Fair, J. E., Feigenbaum, E., Gourdin, W. H., Hawley, R. A., Honig, J., House, R. K., Jancaitis, K. S., LaFortune, K. N., Larson, D. W., Le Galloudec, B. J., Lindl, J. D., MacGowan, B. J., Marshall, C. D., McCandless, K. P., McCracken, R. W., Montesanti, R. C., Moses, E. I., Nostrand, M. C., Pryatel, J. A., Roberts, V. S., Rodriguez, S. B., Rowe, A. W., Sacks, R. A., Salmon, J. T., Shaw, M. J., Sommer, S., Stolz, C. J., Tietbohl, G. L., Widmayer, C. C., and Zacharias, R., Fusion Eng. Des. 69, 25 (2016).
3. Spaeth, M. L., Manes, K. R., Bowers, M., Celliers, P., Di Nicola, J.-M., Di Nicola, P., Dixit, S., Erbert, G., Heebner, J., Kalantar, D., Landen, O., MacGowan, B., Van Wonterghem, B., Wegner, P., Widmayer, C., and Yang, S., Fusion Eng. Des. 69, 366 (2016).
4. Denis, V., Beau, V., Deroff, L. L., Lacampagne, L., Chies, T., Julien, X., Bordenave, E., Lacombe, C., Vermersch, S., and Airiau, J. P., Proc. SPIE 10084, 100840I (2017).
5. Miquel, J. L., Lion, C., and Vivini, P., J. Phys. Conf. Ser. 688, 012067 (2013).
6. Fan, W., Jiang, Y. E., Wang, J. F., Wang, X. C., Huang, D. J., Lu, X. H., Wei, H., Li, G. Y., Pan, X., Qiao, Z., Wang, C., Cheng, H., Zhang, P., Huang, W. F., Xiao, Z. L., Zhang, S. J., Li, X. C., Zhu, J. Q., and Lin, Z. Q., High Power Laser Sci. Eng. 6, e34 (2018).
7. Li, P., Wang, W., Jin, S., Huang, W. Q., Wang, W. Y., Su, J. Q., and Zhao, R. C., Laser Phys. 28, 045004 (2018).
8. Rothenberg, J. E., Browning, D. F., and Wilcox, R. B., Proc. SPIE 3492, 51 (1998).
9. Penninckx, D., Beck, N., Gleyze, J. F., and Laurent, V., J. Lightwave Technol. 24, 4197 (2006).
10. Pati, Y. C., Rezaiifar, R., and Krishnaprasad, P. S., in Proceedings of 27th Asilomar Conference on Signals, Systems and Computers (1993), p. 40.
11. Huang, C. H., Lu, X. H., Jiang, Y. E., Wang, X. C., Qiao, Z., and Fan, W., Appl. Opt. 56, 1610 (2017).
12. Zhang, Z. T., Zhu, J. J., Kuang, C. L., and Ke, Y. C., J. Geodesy Geodyn. 34, 128 (2014).
13. Qian, X. L., Fan, W., Lu, X. H., and Wang, X. C., High Power Laser Sci. Eng. 9, e15 (2021).
14. Li, Z. W., Liu, F., Yang, W. J., Peng, S. H., and Zhou, J., IEEE Trans. Neural Networks Learn. Syst. 33, 6999 (2021).
15. Leach, W., Henrikson, J., Hatarik, R., Liebman, J., Mundhenk, N., Palmer, N., and Rever, M., Proc. SPIE 10898, 108980I (2019).
16. Döpp, A., Eberle, C., Howard, S., Irshad, F., and Lin, J. P., High Power Laser Sci. Eng. 11, e55 (2023).
17. Luo, J., Tian, Z. Y., Li, L., Ni, Z. G., Xie, X. Q., and Zhou, X. W., Fusion Eng. Des. 194, 113888 (2023).
18. Liao, Y. Z., Huang, X. X., Geng, Y. C., Yuan, Q., and Hu, D. X., Photonics 10, 1244 (2023).
19. Zou, L., Geng, Y. C., Liu, B. B., Chen, F. D., Zhou, W., Peng, Z. T., Hu, D. X., Yuan, Q., Liu, G. D., and Liu, L. Q., Opt. Express 30, 29885 (2022).
20. Stoller, D., Durand, S., and Ewert, S., in 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2019), p. 181.
21. Yang, G. J., Zeng, K. Y., Wang, L., Tang, M., and Liu, D. M., Opt. Express 30, 34453 (2022).
22. Denis, V., Nicolaizeau, M., Néauport, J., Lacombe, C., and Fourtillan, P., Proc. SPIE 11666, 1166603 (2021).
23. Tian, Z. Y., Zhang, Z. H., Chu, X. K., Qin, Y., Zhang, Q., Geng, Y. C., Huang, X. X., Wang, W. Y., and Wang, A., J. Phys. Conf. Ser. 1453, 012088 (2020).
24. Qiao, Z., Wang, X. C., Fan, W., Li, X. C., Jiang, Y. E., Li, R., Huang, C. H., and Lin, Z. Q., Appl. Opt. 55, 8352 (2016).
25. Li, R., Fan, W., Jiang, Y. E., Qiao, Z., Zhang, P., and Lin, Z. Q., Appl. Opt. 56, 993 (2017).
26. Zhang, Y. J., Fan, W., Wang, J. F., Wang, X. C., Lu, X. H., Huang, D. J., Xu, S. Y., Zhang, Y. L., Sun, M. Y., Jiao, Z. Y., Zhou, S. L., and Jiang, X. Q., High Power Laser Sci. Eng. 12, e9 (2024).
27. Zhang, Z., Xu, Y., Yang, J., Li, X., and Zhang, D., IEEE Access 3, 490 (2015).
28. Ronneberger, O., Fischer, P., and Brox, T., in Medical Image Computing and Computer-Assisted Intervention–MICCAI (2015), p. 234.
29. Simone, L. and Saleh, M. F., Opt. Express 32, 5582 (2024).
30. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z., in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), p. 2818.
31. Hu, J., Shen, L., Albanie, S., Sun, G., and Wu, E., in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), p. 7132.