Burgers' equation is a nonlinear partial differential equation that models the propagation and reflection of shock waves. It is widely used in fluid mechanics, nonlinear acoustics, gas dynamics, and other fields. It is named after Johannes Martinus Burgers (1895-1981).
The 1-D Burgers' equation models, among other applications, the one-dimensional flow of a viscous fluid. It takes the form
$$ \partial_t u(x, t)+\partial_x (u^2(x, t)/2)=\nu \partial_{xx} u(x, t), \quad x \in(0,1), t \in(0, 1] $$
$$ u(x, 0)=u_0(x), \quad x \in(0,1) $$
where $u$ is the velocity field, $u_0$ is the initial condition and $\nu$ is the viscosity coefficient.
We aim to learn the operator mapping the initial condition to the solution at time one:
$$ u_0 \mapsto u(\cdot, 1) $$
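To make the target operator concrete, here is a minimal NumPy sketch that maps an initial condition $u_0$ to $u(\cdot, 1)$ with an explicit finite-difference scheme on a periodic grid. This is only an illustration of the operator the KNO learns, not the dataset generator used by this example; the function names and parameters are hypothetical.

```python
import numpy as np

def burgers_step(u, dx, dt, nu):
    """One explicit finite-difference step of Burgers' equation.

    Central differences for the flux d/dx(u^2/2) and the diffusion term,
    with periodic boundary conditions via np.roll."""
    flux = 0.5 * u ** 2
    dflux = (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)
    d2u = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2
    return u + dt * (-dflux + nu * d2u)

def solve_burgers(u0, nu=0.1, t_end=1.0, dt=1e-4):
    """Map the initial condition u0 to u(., t_end) -- the operator to learn."""
    u = u0.copy()
    dx = 1.0 / len(u0)
    for _ in range(int(t_end / dt)):
        u = burgers_step(u, dx, dt, nu)
    return u

x = np.linspace(0, 1, 128, endpoint=False)
u1 = solve_burgers(np.sin(2 * np.pi * x))  # u(., 1) for a sine initial condition
```

With this viscosity, the sine profile steepens slightly and then decays; the time step is chosen to satisfy the diffusive stability limit $\Delta t \le \Delta x^2 / (2\nu)$.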
The following figure shows the architecture of the Koopman Neural Operator, which contains upper and lower main branches with corresponding outputs. In the figure, Input represents the initial condition.

In the upper branch, the Encoding layer lifts the input vector to a higher-dimensional channel space. The result is fed into the Koopman layer, which performs a nonlinear transformation of the frequency-domain information, and the Decoding layer then maps the transformed result to the Prediction. Meanwhile, the lower branch maps the input vector to the same high-dimensional space through the Encoding layer and then reconstructs the input through the Decoding layer. The Encoding layers of the two branches share weights, and so do the Decoding layers.

The Prediction is compared with the Label to compute the prediction error, and the Reconstruction is compared with the Input to compute the reconstruction error. Together, the two errors drive the gradient computation of the model.
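The two-branch objective described above can be sketched as follows. The callables `encode`, `koopman`, and `decode` are NumPy stand-ins that mirror the figure, not the actual MindFlow API; the real model would use trainable MindSpore layers.

```python
import numpy as np

def kno_losses(encode, koopman, decode, x_input, label):
    """Combined prediction + reconstruction loss of the two-branch KNO sketch."""
    # Upper branch: encode -> Koopman layer(s) -> decode -> Prediction
    z = encode(x_input)
    prediction = decode(koopman(z))
    # Lower branch: encode -> decode -> Reconstruction (shared weights)
    reconstruction = decode(encode(x_input))
    pred_err = np.mean((prediction - label) ** 2)
    recon_err = np.mean((reconstruction - x_input) ** 2)
    return pred_err + recon_err  # both errors guide the gradients

# Toy check with identity layers: only the prediction error remains.
identity = lambda a: a
loss = kno_losses(identity, identity, identity, np.ones(4), np.zeros(4))
```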
The Koopman Neural Operator consists of an Encoding layer, one or more Koopman layers, and a Decoding layer, arranged in the two branches.
The Koopman layer is shown in the dotted box and may be repeated. Starting from the input: apply the Fourier transform (FFT); apply a linear transformation to the lower Fourier modes and filter out the higher modes; then apply the inverse Fourier transform (iFFT). The result is added to the input, and the Koopman layer output vector is obtained by applying the activation function.
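The steps above can be sketched in NumPy for a single real-valued channel. The function name and the mode-wise complex `weights` are illustrative assumptions; the real layer applies a learned linear map per retained mode across channels.

```python
import numpy as np

def koopman_layer(v, weights, modes, activation=np.tanh):
    """One Koopman layer pass (illustrative sketch).

    v: (n,) real input vector; weights: (modes,) complex factors applied
    mode-wise to the lowest Fourier modes; higher modes are filtered out."""
    v_hat = np.fft.rfft(v)                        # Fourier transform (FFT)
    out_hat = np.zeros_like(v_hat)
    out_hat[:modes] = weights * v_hat[:modes]     # linear map on low modes, drop the rest
    out = np.fft.irfft(out_hat, n=len(v))         # inverse Fourier transform (iFFT)
    return activation(out + v)                    # residual add, then activation

rng = np.random.default_rng(0)
v = rng.standard_normal(64)
out = koopman_layer(v, np.ones(16, dtype=complex), modes=16)
```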
You can download the dataset from data_driven/airfoil/2D_steady for model evaluation. Save the dataset at `./dataset`.
You can call `train.py` from the command line:

```shell
python train.py --config_file_path ./configs/kno1d.yaml --mode GRAPH --device_target Ascend --device_id 0
```
where:

- `--config_file_path` indicates the path of the parameter file. Default `./configs/kno1d.yaml`.
- `--device_target` indicates the computing platform; you can choose `Ascend` or `GPU`. Default `Ascend`.
- `--device_id` indicates the index of the NPU or GPU. Default `0`.
- `--mode` is the running mode: `GRAPH` indicates static graph mode, `PYNATIVE` indicates dynamic graph mode.
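The flags above could be parsed with a minimal `argparse` setup like the sketch below. The argument names and defaults come from the option list in this README; the parser itself is an assumption about how `train.py` is structured.

```python
import argparse

# Minimal CLI sketch mirroring the flags documented above.
parser = argparse.ArgumentParser(description="KNO1D training")
parser.add_argument("--config_file_path", default="./configs/kno1d.yaml",
                    help="path of the parameter file")
parser.add_argument("--mode", default="GRAPH", choices=["GRAPH", "PYNATIVE"],
                    help="GRAPH: static graph mode; PYNATIVE: dynamic graph mode")
parser.add_argument("--device_target", default="Ascend", choices=["Ascend", "GPU"],
                    help="computing platform")
parser.add_argument("--device_id", default=0, type=int,
                    help="index of the NPU or GPU")

# Parse an empty argument list here to exercise the defaults;
# a real script would call parser.parse_args() on sys.argv.
args = parser.parse_args([])
```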
You can also run the training and validation code line by line using the Chinese or English version of the Jupyter Notebook.
Six samples are taken and 10 consecutive prediction steps are performed; the predictions are visualized as follows.
| Parameter | Ascend | GPU |
|---|---|---|
| Hardware | Ascend 910A, 32 GB; CPU: 2.6 GHz, 192 cores | NVIDIA V100, 32 GB |
| MindSpore version | 2.0.0 | 2.0.0 |
| Train loss | 3e-5 | 3e-5 |
| Valid loss | 3e-3 | 3e-3 |
| Speed | 2 s/epoch | 7 s/epoch |
Gitee id: dyonghan

Email: dyonghan@qq.com