Brian-K committed on 2023-08-30 17:10 · [fix]

# Koopman Neural Operator Solves 1D Burgers Equation

## Overview

### Problem Description

Burgers' equation is a nonlinear partial differential equation that simulates the propagation and reflection of shock waves. It is widely used in fluid mechanics, nonlinear acoustics, gas dynamics, and related fields. It is named after Johannes Martinus Burgers (1895-1981).

Applications of the 1-D Burgers' equation include modeling the one-dimensional flow of a viscous fluid. It takes the form

$$ \partial_t u(x, t)+\partial_x (u^2(x, t)/2)=\nu \partial_{xx} u(x, t), \quad x \in(0,1), t \in(0, 1] $$

$$ u(x, 0)=u_0(x), \quad x \in(0,1) $$

where $u$ is the velocity field, $u_0$ is the initial condition and $\nu$ is the viscosity coefficient.

We aim to learn the operator mapping the initial condition to the solution at time one:

$$ u_0 \mapsto u(\cdot, 1) $$
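To make the target mapping concrete, the sketch below generates a reference solution $u(\cdot, 1)$ from an initial condition $u_0$ with a plain explicit finite-difference scheme on a periodic grid (central differences for the flux $u^2/2$ and the diffusion term). This is an illustrative numerical solver, not the data pipeline used by the repository; the grid size and time step are assumptions chosen to satisfy the explicit stability limit $\Delta t \le \Delta x^2 / (2\nu)$.

```python
import numpy as np

def burgers_step(u, dx, dt, nu):
    """One explicit step of u_t + (u^2/2)_x = nu * u_xx on a periodic grid."""
    flux = 0.5 * u ** 2
    dflux_dx = (np.roll(flux, -1) - np.roll(flux, 1)) / (2.0 * dx)   # central difference of flux
    d2u_dx2 = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2   # second derivative
    return u + dt * (nu * d2u_dx2 - dflux_dx)

def solve_burgers(u0, nu=0.1, t_end=1.0, dt=1e-4):
    """March u from t=0 to t=t_end; returns the operator target u(., t_end)."""
    dx = 1.0 / len(u0)
    u = u0.copy()
    for _ in range(int(round(t_end / dt))):
        u = burgers_step(u, dx, dt, nu)
    return u

x = np.linspace(0.0, 1.0, 128, endpoint=False)
u1 = solve_burgers(np.sin(2.0 * np.pi * x))   # u(., 1) for u0 = sin(2*pi*x)
```

With $\nu = 0.1$ the viscous term damps the sine profile strongly over one time unit, so the solution at $t = 1$ has a much smaller amplitude than $u_0$.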

### Technical Path

The following figure shows the architecture of the Koopman Neural Operator, which contains upper and lower main branches with corresponding outputs. In the figure, Input represents the initial condition. In the upper branch, the input vector is lifted to a higher-dimensional channel space by the Encoding layer. The mapping result is then used as the input of the Koopman layer, which performs a nonlinear transformation of the frequency-domain information. Finally, the Decoding layer maps the transformed result to the Prediction. Meanwhile, the lower branch performs the same high-dimensional mapping of the input vector through the Encoding layer and then reconstructs the input through the Decoding layer. The Encoding layers of the two branches share weights, as do the Decoding layers. The Prediction is compared with the Label to compute the prediction error, and the Reconstruction is compared with the Input to compute the reconstruction error. The two errors together guide the gradient computation of the model.

The Koopman Neural Operator consists of the Encoding Layer, Koopman Layers, Decoding Layer and two branches.

The Koopman Layer is shown in the dotted box and can be repeated. Starting from the input: apply the Fourier transform (FFT); apply a linear transformation on the lower Fourier modes and filter out the higher modes; then apply the inverse Fourier transform (iFFT). The output is then added to the input. Finally, the Koopman Layer output vector is obtained through the activation function.
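The steps above can be sketched in a few lines of NumPy. This is a simplified, single-channel illustration: `weights` is a per-mode (diagonal) complex linear map, whereas the actual KNO mixes channels with full weight matrices per mode; the function name and signature are hypothetical.

```python
import numpy as np

def koopman_layer(x, weights, modes):
    """Simplified Koopman layer on a 1-D real signal.

    weights: complex array of shape (modes,), a per-mode linear map
    (the real implementation uses full channel-mixing matrices).
    """
    x_ft = np.fft.rfft(x)                       # FFT
    out_ft = np.zeros_like(x_ft)
    out_ft[:modes] = weights * x_ft[:modes]     # linear map on low modes; high modes dropped
    y = np.fft.irfft(out_ft, n=x.shape[-1])     # iFFT
    return np.tanh(y + x)                       # add input, then activation

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
w = rng.standard_normal(16) + 1j * rng.standard_normal(16)
y = koopman_layer(x, w, modes=16)
```

The residual addition (`y + x`) preserves the high-frequency content that the spectral filter discards, which is why the layer can be stacked many times without losing the input signal.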

Koopman Layer structure

## QuickStart

You can download the dataset from data_driven/airfoil/2D_steady for model evaluation. Save the dataset at `./dataset`.

### Run Method 1: Call `train.py` from the command line

```shell
python train.py --config_file_path ./configs/kno1d.yaml --mode GRAPH --device_target Ascend --device_id 0
```

where:

- `--config_file_path` indicates the path of the parameter file. Default `./configs/kno1d.yaml`.
- `--mode` is the running mode: `GRAPH` indicates static graph mode, `PYNATIVE` indicates dynamic graph mode.
- `--device_target` indicates the computing platform: `Ascend` or `GPU`. Default `Ascend`.
- `--device_id` indicates the index of the NPU or GPU. Default `0`.
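For reference, the flags above can be declared with `argparse` as follows. The flag names and documented defaults come from this README; the default for `--mode` is assumed from the example command, and `build_parser` is a hypothetical helper name.

```python
import argparse

def build_parser():
    """CLI flags matching the train.py invocation documented above."""
    p = argparse.ArgumentParser(description="KNO solves the 1D Burgers equation")
    p.add_argument("--config_file_path", default="./configs/kno1d.yaml",
                   help="path of the parameter file")
    p.add_argument("--mode", default="GRAPH", choices=["GRAPH", "PYNATIVE"],
                   help="GRAPH: static graph mode; PYNATIVE: dynamic graph mode")
    p.add_argument("--device_target", default="Ascend", choices=["Ascend", "GPU"],
                   help="computing platform")
    p.add_argument("--device_id", type=int, default=0,
                   help="index of the NPU or GPU")
    return p

args = build_parser().parse_args([])   # [] -> use the documented defaults
```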

### Run Method 2: Run the Jupyter Notebook

You can run the training and validation code line by line using the Jupyter Notebook (Chinese Version and English Version).

## Results Display

Take 6 samples and run 10 consecutive prediction steps. The predictions are visualized below.

KNO Solves Burgers Equation
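A multi-step prediction like this is produced autoregressively: each output is fed back as the next input. The sketch below shows the loop with a stand-in `step_fn` in place of the trained KNO's one-step prediction (the function names are illustrative, not the repository's API).

```python
import numpy as np

def rollout(step_fn, u0, steps=10):
    """Autoregressive prediction: feed each output back as the next input.

    step_fn stands in for the trained model's one-step prediction.
    Returns an array of shape (steps, n_points).
    """
    u, preds = u0, []
    for _ in range(steps):
        u = step_fn(u)
        preds.append(u)
    return np.stack(preds)

# Toy stand-in model: uniform decay by a factor of 0.9 per step.
u0 = np.ones(8)
traj = rollout(lambda u: 0.9 * u, u0, steps=10)
```

Note that prediction errors compound over consecutive steps, which is why the visualization above shows all 10 steps rather than a single prediction.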

## Performance

| Parameter | Ascend | GPU |
| --- | --- | --- |
| Hardware | Ascend 910A, 32 GB; CPU: 2.6 GHz, 192 cores | NVIDIA V100, 32 GB |
| MindSpore version | 2.0.0 | 2.0.0 |
| Train loss | 3e-5 | 3e-5 |
| Valid loss | 3e-3 | 3e-3 |
| Speed | 2 s/epoch | 7 s/epoch |

## Contributor

gitee id:dyonghan

email: dyonghan@qq.com
