Date: Thu, 30 Jan 2020 18:49:46 +0300
From: Alexander Turenko
Message-ID: <20200130154946.57toungzdlrfup5q@tkn_work_nb>
Subject: Re: [Tarantool-patches] [PATCH v7] gitlab-ci: implement packing into MCS S3
List-Id: Tarantool development patches
To: "Alexander V. Tikhonov"
Cc: Oleg Piskunov, tarantool-patches@dev.tarantool.org

CCed Oleg.

Oleg, please look at the commit message: is it understandable without
much context? If not, please work together with Alexander on it.

I almost didn't look into the script (because I guess I won't add much
value here and because Igor already did). I have several small
comments, but I'm not against the patch as a whole.

WBR, Alexander Turenko.

> The changes introduce new Gitlab-CI rules for creating packages on
> branches with "-full-ci" suffix and their subsequent deployment to the
> 'live' repository for master and release branches. Packages for tagged
> commits are also delivered to the corresponding 'release' repository.
>
> The PackageCloud storage is replaced with the new self-hosted one

The packagecloud rules are not removed (and that is what we planned to
do for the moment), so the word 'replaced' looks confusing here.

> (based on S3 object storage) where all old packages have been synced.
> The new builds will be pushed only to S3 based repos. Benefits of the

This is not so: we don't plan to disable the packagecloud repositories
right now.

> introduced approach are the following:
> * As far as all contents of self-hosted repos are fully controlled,
>   their layout is the same as the ones provided by the corresponding
>   distro
> * Repo metadata rebuild is excess considering the known repo layout
> * Old packages are not pruned since they do not affect the repo
>   metadata rebuilding time
>
> For these purposes the standalone script for pushing DEB and RPM
> packages to self-hosted repositories is introduced. The script
> implements the following flow:
> * creates new metafiles for the new packages
> * copies new packages to S3 storage
> * fetches relevant metafiles from the repo
> * merges the new metadata with the fetched one
> * pushes the updated metadata to S3 storage

Is some locking mechanism implemented? What will happen if two CI jobs
fetch the metadata, update it locally and then push it back? Does it
depend on whether those CI jobs run on the same machine or on
different ones?

>
> There are distro dependent parts in the script:
> * For RPM packages it updates metadata separately per each repo
>   considering 'createrepo' util behaviour
> * For DEB packages it updates metadata simultaneously for all repos
>   considering 'reprepro' util behaviour

What does 'repo' mean here? We have the 1.10 repo, but it contains
centos/7, debian/jessie and other repos. I don't understand this
paragraph, to be honest.

Aside from that, we always update only one repository (say,
1.10 x centos/7) at the moment, so 'simultaneous update' looks even
more confusing.
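If the intended point is only the difference in how the two tools
manage metadata, maybe spell that out. My guess at what is meant, as
an illustration only (the paths and the package name are made up, not
taken from the script):

    # RPM: metadata lives in a per-directory repodata/, so every repo
    # directory has to be rebuilt on its own.
    createrepo --update el/7/x86_64/

    # DEB: reprepro keeps one database for all codenames listed in
    # conf/distributions, so adding a package touches the shared
    # metadata of the whole tree at once.
    reprepro -b debian includedeb jessie tarantool_2.3.1-1_amd64.deb

If that is the idea, stating it explicitly in the commit message would
help.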
>
> Closes #3380
>
> @TarantoolBot
> Title: Update download instructions on the website
>
> Need to update download instructions on the website, due to the new
> repository based on MCS S3.

First, it is a stub, not a request to the documentation team. What
actions are requested here?

Second, since we decided to keep the current repositories structure
(separate 1.10/2.1/... repos and no version in a package name), no
actions are actually required.

> ---
>
> Github: https://github.com/tarantool/tarantool/tree/avtikhon/gh-3380-push-packages-s3-full-ci
> Issue: https://github.com/tarantool/tarantool/issues/3380
>
> v6: https://lists.tarantool.org/pipermail/tarantool-patches/2020-January/013763.html
> v5: https://lists.tarantool.org/pipermail/tarantool-patches/2020-January/013636.html
> v4: https://lists.tarantool.org/pipermail/tarantool-patches/2020-January/013568.html
> v3: https://lists.tarantool.org/pipermail/tarantool-patches/2019-December/013060.html
> v2: https://lists.tarantool.org/pipermail/tarantool-patches/2019-November/012352.html
> v1: https://lists.tarantool.org/pipermail/tarantool-patches/2019-October/012021.html

> diff --git a/.gitlab.mk b/.gitlab.mk
> index 48a92e518..243f83f2c 100644
> --- a/.gitlab.mk
> +++ b/.gitlab.mk
> @@ -98,14 +98,38 @@ vms_test_%:
>  vms_shutdown:
>  	VBoxManage controlvm ${VMS_NAME} poweroff
>
> -# ########################
> -# Build RPM / Deb packages
> -# ########################
> +# ###########################
> +# Sources tarballs & packages
> +# ###########################

But it is about packages, not tarballs. It seems the comment was
copied from .travis.yml, but it is not appropriate here.

> +
> +# Push alpha and beta versions to <major>x bucket (say, 2x),
> +# stable to <major>.<minor> bucket (say, 2.2).
> +GIT_DESCRIBE=$(shell git describe HEAD)
> +MAJOR_VERSION=$(word 1,$(subst ., ,$(GIT_DESCRIBE)))
> +MINOR_VERSION=$(word 2,$(subst ., ,$(GIT_DESCRIBE)))
> +BUCKET=$(MAJOR_VERSION)_$(MINOR_VERSION)
> +ifeq ($(MINOR_VERSION),0)
> +BUCKET=$(MAJOR_VERSION)x
> +endif
> +ifeq ($(MINOR_VERSION),1)
> +BUCKET=$(MAJOR_VERSION)x
> +endif
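Tracing these rules with made-up 'git describe' outputs, as I read
them:

    git describe HEAD -> 2.3.1-42-g0123abc
    MAJOR_VERSION = 2, MINOR_VERSION = 3 -> BUCKET = 2_3
        (neither ifeq matches)

    git describe HEAD -> 2.1.2-75-gdeadbee
    MAJOR_VERSION = 2, MINOR_VERSION = 1 -> BUCKET = 2x
        (the second ifeq overrides the default)

So, if I read the rules right, only the 2.0 and 2.1 series end up in
the 'x' bucket.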
Let's push to 2.1; we'll add 2x for compatibility using redirects on
download.tarantool.org.

>
>  package: git_submodule_update
>  	git clone https://github.com/packpack/packpack.git packpack
>  	PACKPACK_EXTRA_DOCKER_RUN_PARAMS='--network=host' ./packpack/packpack
>
> +deploy: package
> +	for key in ${GPG_SECRET_KEY} ${GPG_PUBLIC_KEY} ; do \
> +		echo $${key} | base64 -d | gpg --batch --import || true ; done

I guess we need just the secret key for signing.

> +	./tools/update_repo.sh -o=${OS} -d=${DIST} \
> +		-b="s3://tarantool_repo/live/${BUCKET}" build

I would hide the name of the S3 bucket behind a CI variable. It is
more about our infrastructure, and it would be good to expose less of
it in the repo.

> +	for tag in $$(git tag) ; do \
> +		git describe --long $${tag} ; \
> +	done | grep "^$$(git describe --long)$$" >/dev/null && \

Simpler way:

    git name-rev --name-only --tags --no-undefined HEAD 2>/dev/null

See https://stackoverflow.com/a/11489642/1598057

> +	./tools/update_repo.sh -o=${OS} -d=${DIST} \
> +		-b="s3://tarantool_repo/release/${BUCKET}" build
> +function get_os_dists {
> +	os=$1
> +	alldists=
> +
> +	if [ "$os" == "ubuntu" ]; then
> +		alldists='trusty xenial bionic cosmic disco eoan'
> +	elif [ "$os" == "debian" ]; then
> +		alldists='jessie stretch buster'
> +	elif [ "$os" == "el" ]; then
> +		alldists='6 7 8'
> +	elif [ "$os" == "fedora" ]; then
> +		alldists='27 28 29 30 31'
> +	fi

We have no Fedora 27 in CI. Should it be here? I don't mind, I just
noticed a tiny inconsistency.

> +
> +	echo "$alldists"
> +}
> +
> +function prepare_ws {
> +	# temporary lock the publication to the repository
> +	ws_suffix=$1
> +	ws=${ws_prefix}_${ws_suffix}
> +	ws_lockfile=${ws}.lock
> +	if [ -f $ws_lockfile ]; then
> +		old_proc=$(cat $ws_lockfile)
> +	fi
> +	lockfile -l 60 $ws_lockfile
> +	chmod u+w $ws_lockfile && echo $$ >$ws_lockfile && chmod u-w $ws_lockfile
> +	if [ "$old_proc" != "" -a "$old_proc" != "0" ]; then
> +		kill -9 $old_proc >/dev/null || true
> +	fi

So the current script can be killed by another instance of the script?
This means that the lock does not work.

> +
> +	# create temporary workspace with repository copy
> +	$rm_dir $ws
> +	$mk_dir $ws

Do I understand right that we don't create a copy of the remote
repository here?
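For comparison, a pattern that would actually serialize the instances
is to hold a descriptor on the lock file with flock for the whole
critical section. Just a sketch (the fd number is arbitrary), and it
only helps when the jobs land on the same machine, which brings us
back to my question above about jobs running on different ones:

    ws_lockfile=${ws}.lock
    exec 9>"$ws_lockfile"            # keep the descriptor open for the whole run
    if ! flock --wait 60 9; then     # wait up to 60 seconds for the lock
        echo "another repo update is still running, giving up" >&2
        exit 1
    fi
    # ... fetch, merge and push the repository metadata here ...
    # the lock is released when fd 9 is closed on exit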