<HTML><BODY><blockquote style="border-left:1px solid #0857A6; margin:10px; padding:0 0 0 10px;">
Monday, February 17, 2020, 1:04 +03:00 from Alexander Turenko <alexander.turenko@tarantool.org>:<br><br><div id=""><div class="js-helper js-readmsg-msg"><div><div id="style_15818906401089662990_BODY">To avoid any confusion, I'll share our agreement:<br><br>
- Proposal (stage 1):<br>
- Don't review / change workloads and harness.<br>
- Enable automatic runs of benchmarks on long-term branches (per-push).<br>
- Save those results to the existing database (one under<br>
bench.tarantool.org).<br>
- Resurrect bench.tarantool.org: it should show new results.<br><br>
After this we should review workload kinds and sizes, improve<br>
visualization, set up alerts, and make other enhancements that will turn<br>
performance tracking / measurements into a useful tool.<br><br>
Since we are discussing the first stage now, there is nothing to review.<br><br>
My suggestion was to move everything we have no agreement on into a<br>
separate repository (bench-run), to avoid many fixups within the<br>
tarantool repository in the near future and to split the responsibility<br>
(the QA team is both producer and consumer of performance tracking<br>
results).<br><br>
We have no agreement on using docker in performance testing (I'm<br>
strongly against it, but it is not in my area of responsibility). So any traces of<br>
docker should stay within the bench-run repository. Here I expect only<br>
./bench-run/prepare.sh and ./bench-run/sysbench.sh calls, nothing more.<br></div></div></div></div></blockquote><p>Ok, sure, I've moved all the actions into bench-run; only make calls and<br>benchmark script runner calls are left in the Tarantool sources:<br></p><p>- a make call to create/update the Docker images<br>- the benchmark script runner calls<br>- a make call to clean up the short-term Docker image</p><blockquote style="border-left:1px solid #0857A6; margin:10px; padding:0 0 0 10px;"><div id=""><div class="js-helper js-readmsg-msg"><div><div id="style_15818906401089662990_BODY"><br>
We can pass the docker repository URI and credentials within environment<br>
variables (secret ones for the credentials) and use them in bench-run. I don't<br>
see any problem with doing it this way.<br></div></div></div></div></blockquote>Fixed as suggested.<br><blockquote style="border-left:1px solid #0857A6; margin:10px; padding:0 0 0 10px;"><div id=""><div class="js-helper js-readmsg-msg"><div><div id="style_15818906401089662990_BODY"><br>
Aside from this, I'm against using gitlab-runner on performance<br>
machines, because I don't know how it works. But okay, maybe everything<br>
will be fine; however, please monitor its behaviour.<br></div></div></div></div></blockquote>Right, it doesn't affect the performance results; also, at the next stage of the<br>performance process development it can be monitored by Prometheus (still under discussion).<br><blockquote style="border-left:1px solid #0857A6; margin:10px; padding:0 0 0 10px;"><div id=""><div class="js-helper js-readmsg-msg"><div><div id="style_15818906401089662990_BODY"><br>
My objections against using docker in performance testing are below.<br>
Feel free to skip them: they are here only so I can say 'I said this!' in the future.<br><br>
Several questions about the patch and bench-run are at the end of the email<br>
(they are about stage 2, yep, but anyway).<br><br>
WBR, Alexander Turenko.<br><br>
----<br><br>
Docker virtualizes the network and disk (both root and volumes). Any<br>
virtualization layer adds complexity: it requires more expertise and work<br>
to investigate and explain results, and it may affect results on its own,<br>
making them less predictable and stable. On the other hand, it does not<br>
give any gains for performance testing.<br><br>
One may say that it freezes userspace, but that may easily be achieved w/o<br>
docker: just don't change it. That's all.<br><br>
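To make the 'just don't change it' approach concrete, here is a minimal sketch (purely illustrative on my part, not actual bench-run code; the paths and commands are assumptions) of detecting userspace drift on a bare-metal benchmark host by diffing against a recorded baseline:<br><br>

```shell
# Illustrative sketch: snapshot the environment once, then diff against that
# baseline before each benchmark run to catch unexpected userspace updates.
baseline="${BASELINE:-$(mktemp -u /tmp/userspace.baseline.XXXXXX)}"

snapshot() {
    uname -r                                  # kernel version
    head -n 2 /etc/os-release 2>/dev/null     # distro id; extend with dpkg -l / rpm -qa
}

if [ -f "$baseline" ]; then
    snapshot | diff -u "$baseline" - \
        || echo "userspace drifted since baseline" >&2
else
    snapshot > "$baseline"
    echo "baseline recorded at $baseline"
fi
```

A real version would snapshot the full package list, but the shape of the check is the same.<br>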
Okay, this topic is not so easy when the machine where performance<br>
testing is performed is not fully controlled: weird processes within an<br>
organization do not save us from somebody who will log in and update<br>
something (strange, yep?).<br><br>
Docker will not save us from this situation: somebody may update docker<br>
itself, or the kernel, or run something that will affect results that are in<br>
flight. The problem is in the processes, and it should be solved first.<br><br>
One may say that docker does not spoil performance results. Maybe. Maybe<br>
not. It is hard to say without a deep investigation. While the gains are so<br>
vague, I would not spend my time looking in this direction.<br><br>
This is basically all, but I'll share several questions to show that my<br>
point 'adding a virtualization layer requires more expertise' has<br>
some ground; they are below.<br><br>
----<br><br>
Will vm.dirty_ratio work the same way for dirty pages of a<br>
filesystem within a volume as for an underlying filesystem? Does it<br>
depend on the particular underlying filesystem? Will it use the container's<br>
memory size to calculate the dirty pages percentage, or the system-wide one?<br><br>
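As a starting point for answering this, the writeback knobs in question can be read without root from /proc; a quick check (my illustration, run it both on the host and inside a container and compare what each sees):<br><br>

```shell
# Read the writeback sysctls the question refers to. These /proc files expose
# the system-wide settings; comparing the values seen from the host and from
# inside a container shows whether the container gets its own view.
dirty_ratio=$(cat /proc/sys/vm/dirty_ratio)
dirty_bg_ratio=$(cat /proc/sys/vm/dirty_background_ratio)
echo "vm.dirty_ratio=${dirty_ratio} vm.dirty_background_ratio=${dirty_bg_ratio}"
```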
Will `sync + drop caches` within a container affect disc buffers<br>
outside of the container (say, ones that remain after a previous run<br>
within another container)?<br><br>
Will a unix domain socket created within an overlay filesystem<br>
behave the same way as on a real filesystem (in case we test<br>
iproto via a unix socket)?<br><br>
Will fsync() flush data to a real disc, or will it be caught somewhere<br>
within docker? We had a related regression [1].<br><br>
[1]: <a href="https://github.com/tarantool/tarantool/issues/3747" target="_blank">https://github.com/tarantool/tarantool/issues/3747</a><br><br>
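One crude way to probe this (a sketch of mine, not a rigorous benchmark): write a small file with an fsync before exit and compare how long it takes on the filesystem under test versus a known-real disk. An fsync absorbed by a caching layer completes suspiciously close to a pure page-cache write.<br><br>

```shell
# Crude fsync probe (illustrative): dd with conv=fsync issues fsync before
# exiting and reports the transfer rate on stderr. Run it on a host directory,
# an overlay path, and a volume, then compare the reported rates.
probe_file="$(mktemp ./fsync_probe.XXXXXX)"
dd if=/dev/zero of="$probe_file" bs=4k count=256 conv=fsync 2>&1
rm -f "$probe_file"
```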
----<br><br>
Black box testing sucks. We should deeply understand what we're testing,<br>
otherwise it will have 'some quality', which will never be good.<br><br>
Performance testing with docker is a black box for me. When it is<br>
'tarantool + libc + some libs + kernel' I more or less understand (or am at<br>
least able to inspect) what is going on, and I can, say, propose to add /<br>
remove / tune workloads to cover specific needs.<br><br>
I can dig into docker, of course, but there are so many things that<br>
deserve time more than this activity.<br><br>
----<br><br>
I looked here and there around the patch and bench-run and have several<br>
questions. Since we agreed not to review anything around workloads<br>
for now, these are just questions. Okay to ignore.<br><br>
I don't see any volume / mount parameters. Doesn't this mean that WAL<br>
writes will go to an overlay fs? I guess that may be far from a real<br>
disc and may have a separate level of caching.<br></div></div></div></div></blockquote><p>Gitlab-runner uses a Docker image for running the jobs; its configuration, at a<br>very high level, is:<br></p><p>tls_verify = false<br>memory = "60g"<br>memory_swap = "60g"<br>cpuset_cpus = "6,7,8,9,10,11"<br>privileged = true<br>disable_entrypoint_overwrite = false<br>oom_kill_disable = false<br>disable_cache = false<br>volumes = ["/mnt/gitlab_docker_tmpfs_perf:/builds", "/cache"]<br>network_mode = "host"<br>shm_size = 0</p><p>Also, for benchmarks like linkbench, where disk performance needs to be<br>checked, there is a special gitlab-runner configuration: it really uses disk<br>space and has no swap space, because memory == memory_swap:</p><p>memory = "3g"<br>memory_swap = "3g"<br>privileged = true<br>disable_entrypoint_overwrite = false<br>oom_kill_disable = false<br>disable_cache = false<br>volumes = ["/test_ssd/gitlab:/builds", "/cache"]<br>network_mode = "host"<br>shm_size = 0<br><br>Also, the Docker image's fs type can be checked with 'docker inspect'; by<br>default it is 'overlay2', but it can be changed after the needed discussions.</p><blockquote style="border-left:1px solid #0857A6; margin:10px; padding:0 0 0 10px;"><div id=""><div class="js-helper js-readmsg-msg"><div><div id="style_15818906401089662990_BODY"><br>
AFAICS, the current way of using docker doesn't even try to freeze userspace:<br>
it uses the 'ubuntu:18.04' tag, which is updated from time to time, not,<br>
say, 'ubuntu:bionic-20200112'. It also performs 'apt-get update' inside,<br>
and so userspace will change with each rebuild of the image. We are<br>
unable to change something inside the image without updating everything.<br>
This way we don't actually control userspace updates.</div></div></div></div></blockquote>Right, I've removed the extra 'upgrade' call and left only 'apt-get update', which<br>really just updates the package lists from the repositories.<br><blockquote style="border-left:1px solid #0857A6; margin:10px; padding:0 0 0 10px;"><div id=""><div class="js-helper js-readmsg-msg"><div><div id="style_15818906401089662990_BODY"><br><br>
BTW, why is Ubuntu used while all production environments (where<br>
performance matters) are on RHEL / CentOS 7?</div></div></div></div></blockquote>The Dockerfiles that install the benchmarks, taken from the benchmarks<br>run repository, used Ubuntu 18.04, so it was kept until we decide to<br>update it to CentOS 7 or similar.<br><blockquote style="border-left:1px solid #0857A6; margin:10px; padding:0 0 0 10px;"><div id=""><div class="js-helper js-readmsg-msg"><div><div id="style_15818906401089662990_BODY"><br><br>
Why is the dirty cache not flushed (`sync`) before dropping the clean<br>
caches (`echo 3 > /proc/sys/vm/drop_caches`)?<br></div></div></div></div></blockquote>
Ok, sure, I've checked it and set it up in the bench-run repository scripts.<br>
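For the record, the order in question looks like this (a minimal sketch assumed to match what the scripts now do; the root guard is my addition so the snippet is safe to run unprivileged):<br><br>

```shell
# Agreed cache-cleanup order: flush dirty pages to stable storage first,
# then drop the clean pagecache, dentries and inodes.
sync
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches
else
    echo "drop_caches skipped: requires root" >&2
fi
```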
<br>-- <br>Alexander Tikhonov<br></BODY></HTML>