Check the impact of a new patch on the performance of a certain set of operations:
asv continuous -f 1.05 src/main HEAD -b TimeGroupBy --launch-method=spawn
Check for errors in benchmarks after changing them or writing new ones:
asv run --quick --show-stderr --python=same --launch-method=spawn
Run the entire benchmark suite to get the current timings:
asv run --launch-method=spawn
Check a range of commits for performance degradation:
asv run [start_hash]..[end_hash] --launch-method=spawn
asv publish
asv preview
For more consistent results, you may need to use the following parameters, which are described in the ASV docs:
-a sample_time=1
-a warmup_time=1
--launch-method=forkserver (note: this launch method is currently not working)
Basic information on writing benchmarks can be found in the ASV documentation.
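The stability flags listed above can be combined into a single invocation. The following is a sketch, not a recommended configuration; whether these particular values suit your benchmarks depends on the suite:

```shell
# Hedged example: combine the stability flags with the working spawn launch method
asv run --launch-method=spawn -a sample_time=1 -a warmup_time=1 --show-stderr
```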
Benchmarks from benchmarks/benchmarks.py, benchmarks/scalability/scalability_benchmarks.py, or benchmarks/io/csv.py can be used as a starting point.
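The general shape of an ASV benchmark can be sketched as below. The class and method names here are illustrative and not taken from the Modin suite, and plain NumPy arrays stand in for the DataFrames the real benchmarks operate on; ASV discovers methods prefixed with time_ and calls setup before timing each of them:

```python
import numpy as np


class TimeSumSketch:
    # asv runs the benchmark once per combination of params,
    # passing the current value to setup() and to each time_* method
    params = [10_000, 100_000]
    param_names = ["rows"]

    def setup(self, rows):
        # build the input data; setup time is excluded from the measurement
        self.data = np.random.rand(rows, 4)

    def time_sum(self, rows):
        # only the body of this method is timed by asv
        self.data.sum()
```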
Requirements:
Each benchmark must work correctly with any implementation that MODIN_ASV_USE_IMPL selects and with any dataset size that MODIN_TEST_DATASET_SIZE selects.
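These environment variables can be set for a single run, for example as below. The particular values shown (pandas for the implementation, Big for the dataset size) are assumptions; the accepted values are defined in the Modin documentation:

```shell
# Hypothetical example: benchmark the pandas implementation on the big dataset
MODIN_ASV_USE_IMPL=pandas MODIN_TEST_DATASET_SIZE=Big asv run --launch-method=spawn
```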
Keep in mind that a hash computed from the benchmark's source code is used to identify results for display. After a benchmark is changed, its old results will no longer appear in the dashboard. In general this is the correct behavior, since it prevents incomparable numbers from being shown side by side. Note, however, that some source changes leave the "before" and "after" results comparable, for example renaming a variable or adding a comment. In such cases you must either re-run the new version of the benchmark for all commits ever measured, or manually update the hash in the corresponding result files.
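The effect can be illustrated with a plain hash over the benchmark's source text. ASV's actual hashing scheme may differ from this sketch; the point is only that a purely cosmetic edit still produces a new result key:

```python
import hashlib


def source_hash(source: str) -> str:
    # hash the benchmark's source text, as a stand-in for the result key
    return hashlib.sha256(source.encode()).hexdigest()[:8]


before = "def time_sum(self):\n    self.df.sum()\n"
# a cosmetic change: a comment is added, behavior is identical
after = "def time_sum(self):\n    # sum all columns\n    self.df.sum()\n"

assert source_hash(before) == source_hash(before)  # stable for identical source
assert source_hash(before) != source_hash(after)   # any edit yields a new key
```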
Step 1: checking benchmarks for validity; runs in the PR CI.
During the test, the benchmarks are run once on small data.
The implementation can be found in the test-asv-benchmarks job of ci.yml.
Step 2: running benchmarks and saving the results in modin-bench@master.
The launch takes place on an internal server using a specific TeamCity configuration.
A description of the server can be found in the "Benchmark list" tab on the left, shown when you hover the mouse over the machine name.
This step starts on a schedule (currently every half hour), subject to the presence of new commits in the Modin main branch.
Command to run benchmarks: asv run HASHFILE:hashfile.txt --show-stderr --machine xeon-e5 --launch-method=spawn
The file hashfile.txt contains the latest Modin commit hash.
A push to modin-bench@master triggers step 3 of the pipeline.
Step 3: converting the results to an HTML representation, which is saved in modin-bench@gh-pages.
The implementation can be found in the deploy-gh-pages job of push.yml.
Basic actions for step 2:
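The original listing of these actions is not available here, but based on the description of step 2 above they can be sketched roughly as follows. The repository layout, paths, and the result-copying commands are assumptions; only the asv run command is taken from the text:

```shell
# record the latest Modin commit hash; step 2 benchmarks exactly this commit
git -C modin rev-parse HEAD > hashfile.txt

# run the suite for that commit on the benchmark machine, as described above
asv run HASHFILE:hashfile.txt --show-stderr --machine xeon-e5 --launch-method=spawn

# save the new result files to modin-bench@master, which triggers step 3
cp -r .asv/results/* ../modin-bench/results/
git -C ../modin-bench add results
git -C ../modin-bench commit -m "Add new benchmark results"
git -C ../modin-bench push origin master
```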