ENGLISH | 简体中文
The shallow-water equations (SWE) are a canonical test bed for developing novel algorithms for the dynamical cores of weather and climate prediction models. Bihlo and Popovych propose Physics-Informed Neural Networks (PINNs) to solve the SWE on the rotating sphere. One drawback of PINNs is that the number of training collocation points grows with the size of the problem domain. To cope with the large time domain in the test cases, the authors split it into several non-overlapping subintervals and solve the SWE in each subinterval consecutively, training a new neural network for each interval.
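The time-splitting strategy can be sketched as follows. This is a minimal illustration of the idea, not the repository's actual training loop; the helper `train_fn` and its signature are assumptions for the sketch. Each subinterval gets a fresh network, and the final state of one subinterval serves as the initial condition of the next.

```python
import numpy as np

def train_on_subintervals(t_final, n_intervals, train_fn, initial_condition):
    """Train one network per non-overlapping time subinterval.

    Assumes train_fn(t_start, t_end, ic) -> (model, final_state):
    it trains a new network on [t_start, t_end] with initial condition ic
    and returns the trained model plus the state at t_end.
    """
    edges = np.linspace(0.0, t_final, n_intervals + 1)
    models, ic = [], initial_condition
    for t0, t1 in zip(edges[:-1], edges[1:]):
        model, ic = train_fn(t0, t1, ic)  # a new network per subinterval
        models.append(model)
    return models
```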
This repository illustrates the scenario of a cosine bell advected around the sphere.
Paper: Bihlo, Alex, and Roman O. Popovych. "Physics-Informed Neural Networks for the Shallow-Water Equations on the Sphere." arXiv.org, February 12, 2022.
The dataset used for training is randomly generated by the function `collocation_points` in `./src/process.py`.
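The sampling can be pictured with the following sketch, which draws points uniformly in time, longitude, and latitude. This is an illustrative stand-in, not the actual `collocation_points` implementation; the function name, signature, and uniform ranges here are assumptions.

```python
import numpy as np

def sample_collocation_points(n_pde, days, rng=None):
    """Draw n_pde random collocation points in (t, lambda, theta).

    Hypothetical sketch: time t is in days, lambda is longitude in
    [-pi, pi], theta is latitude in [-pi/2, pi/2].
    """
    rng = np.random.default_rng(rng)
    t = rng.uniform(0.0, days, size=(n_pde, 1))                   # time
    lam = rng.uniform(-np.pi, np.pi, size=(n_pde, 1))             # longitude
    theta = rng.uniform(-np.pi / 2, np.pi / 2, size=(n_pde, 1))   # latitude
    return np.hstack([t, lam, theta])                             # (n_pde, 3)

pts = sample_collocation_points(100000, days=12, rng=0)
```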
Users should give the range of the data, defined by the boundaries of days, lambda, and theta; default ranges are provided.
The size of the dataset depends on the number of samples, which is controlled by `n_pde` and `n_iv` in `config.yaml`; the default values are 100000 and 10000, respectively.
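The relevant entries in `config.yaml` might look like the following excerpt; the key names match the parameters above, but the exact file layout is an assumption.

```yaml
# sample-count settings (illustrative excerpt)
n_pde: 100000   # collocation points for the PDE residual
n_iv: 10000     # points for the initial condition
```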
The pretrained checkpoint files will be downloaded automatically at the first launch. If you need to download the checkpoint files manually, please visit this link.
After installing MindSpore via the official website, you can start training and evaluation as follows:
Default:

```shell
python train.py
```

Full command:

```shell
python train.py \
    --layers 4 20 20 20 20 1 \
    --load_ckpt false \
    --save_ckpt_path ./checkpoints \
    --load_ckpt_path ./checkpoints \
    --save_fig true \
    --figures_path ./figures \
    --log_path ./logs \
    --lr 1e-3 \
    --epochs 18000 \
    --n_pde 100000 \
    --n_iv 10000 \
    --u 1 \
    --h 1000 \
    --days 12 \
    --download_data pinns_swe \
    --force_download false \
    --amp_level O3 \
    --device_id 0 \
    --mode 0
```
```
├── pinns_swe
│   ├── checkpoints                     # checkpoint files
│   ├── data                            # data files
│   ├── figures                         # plotted figures
│   ├── logs                            # log files
│   ├── src                             # source code
│   │   ├── network.py                  # network architecture
│   │   ├── plot.py                     # plotting results
│   │   ├── process.py                  # data processing
│   │   ├── problem.py                  # training process definition
│   │   └── linear_advection_sphere.py  # loss definition according to the advection equation
│   ├── config.yaml                     # hyper-parameter configuration
│   ├── README.md                       # English model description
│   ├── README_CN.md                    # Chinese model description
│   ├── train.py                        # Python training script
│   └── eval.py                         # Python evaluation script
```
Important parameters in `train.py` are as follows:

| parameter | description | default value |
|---|---|---|
| `layers` | neural network layer shape | `4 20 20 20 20 1` |
| `load_ckpt` | whether to load a checkpoint | `false` |
| `save_ckpt_path` | checkpoint saving path | `./checkpoints` |
| `load_ckpt_path` | checkpoint loading path | `./checkpoints` |
| `save_fig` | whether to save and plot figures | `true` |
| `figures_path` | figures saving path | `./figures` |
| `log_path` | log saving path | `./logs` |
| `lr` | learning rate | `1e-3` |
| `epochs` | number of epochs | `18000` |
| `n_pde` | number of collocation points | `100000` |
| `n_iv` | number of initial points | `10000` |
| `u` | velocity scale of the problem | `1` |
| `h` | height scale of the problem | `1000` |
| `days` | total number of days | `12` |
| `download_data` | necessary dataset and/or checkpoints | `pinns_swe` |
| `force_download` | whether to force re-downloading the dataset | `false` |
| `amp_level` | MindSpore auto mixed precision level | `O3` |
| `device_id` | device id to use | `None` |
| `mode` | MindSpore Graph mode (0) or PyNative mode (1) | `0` |
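As a rough illustration of how these flags map to a command-line interface, the following is a hedged `argparse` sketch covering a subset of the parameters; the repository's actual parser may differ in details.

```python
import argparse

def build_parser():
    """Illustrative parser for a subset of train.py's flags (not the
    repository's actual parser)."""
    p = argparse.ArgumentParser(description="PINNs for the SWE on the sphere")
    p.add_argument("--layers", type=int, nargs="+",
                   default=[4, 20, 20, 20, 20, 1])  # network layer shape
    p.add_argument("--lr", type=float, default=1e-3)
    p.add_argument("--epochs", type=int, default=18000)
    p.add_argument("--n_pde", type=int, default=100000)
    p.add_argument("--n_iv", type=int, default=10000)
    p.add_argument("--days", type=int, default=12)
    p.add_argument("--amp_level", default="O3")
    p.add_argument("--mode", type=int, choices=[0, 1], default=0)
    return p

args = build_parser().parse_args([])  # empty argv -> defaults
```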
Running on GPU/Ascend:

```shell
python train.py
```
The loss values during training will be printed in the console and can also be inspected after training in the log file.
```
# grep "loss" log
PDE loss, IC loss in 0th epoch: 0.04082385, 0.13227281, interval 28.69731688, total: 28.69731688
PDE loss, IC loss in 1th epoch: 0.02216472, 0.05938588, interval 3.24713469, total: 31.94445157
PDE loss, IC loss in 2th epoch: 0.01156821, 0.02318317, interval 3.31807733, total: 35.26252890
PDE loss, IC loss in 3th epoch: 0.00694417, 0.00913251, interval 3.22263527, total: 38.48516417
PDE loss, IC loss in 4th epoch: 0.00577628, 0.00795174, interval 3.32371068, total: 41.80887485
PDE loss, IC loss in 5th epoch: 0.00556142, 0.01145195, interval 3.30852318, total: 45.11739802
PDE loss, IC loss in 6th epoch: 0.00492313, 0.01358479, interval 3.31264329, total: 48.43004131
PDE loss, IC loss in 7th epoch: 0.00375959, 0.01274938, interval 3.32251096, total: 51.75255227
...
```
After training, you can still review the training process through the log file saved in `log_path`, the `./logs` directory by default.
The model checkpoint will be saved in `save_ckpt_path`, the `./checkpoints` directory by default.
Before running the command below, please check the checkpoint loading path `load_ckpt_path` specified in `config.yaml` for evaluation.
Running on GPU/Ascend:

```shell
python eval.py
```
You can view the process and results through the `log_path`, `./logs` by default. The resulting figures are saved in `figures_path`, `./figures` by default.