Thursday, December 5, 2019, 15:23 +03:00 from Igor Munkin <imun@tarantool.org>:
Sasha,
Thanks for the patch. It looks much better now (except for the fact that I
would prefer to see it in Perl rather than in Bash), but let's polish it a
bit. Consider my comments below.
Ok, sure.
Moreover, this patch is a complex one, so please split your further work
into several blocks to be committed separately.
Ok.
On 12.11.19, Alexander V. Tikhonov wrote:
> Added the ability to additionally store packages at MCS S3.
> The idea was to add a new way of building packages at MCS S3, which
> temporarily duplicates the packaging at PackageCloud done by the
> Packpack tool. The binaries also needed to be packed in the native
> style of each packaging OS. A standalone script was created that adds
> package binaries/sources to DEB or RPM repositories at MCS S3:
> 'tools/add_pack_s3_repo.sh'
> Common parts of the script are:
> - create new meta files for the new binaries
> - copy the new binaries to MCS S3
> - get the previous meta files from MCS S3 and merge in the meta data
> for the new binaries
> - update the meta files at MCS S3
> Different parts:
> - the DEB part of the script is based on the external tool 'reprepro';
> it works only at the OS version level, i.e. it updates the meta data
> for all distributions together.
> - the RPM part of the script is based on the external tool 'createrepo';
> it works separately at the OS/release level, i.e. it updates the meta
> data for each release separately.
>
> Closes #3380
> ---
>
> Github: https://github.com/tarantool/tarantool/tree/avtikhon/gh-3380-push-packages-s3-full-ci
> Issue: https://github.com/tarantool/tarantool/issues/3380
Please add the changelog per version in further patchsets.
"centos" alias completely removed.
>
> .gitlab-ci.yml | 5 +-
> .gitlab.mk | 20 +-
> .travis.mk | 41 ++--
> tools/add_pack_s3_repo.sh | 493 ++++++++++++++++++++++++++++++++++++++
> 4 files changed, 533 insertions(+), 26 deletions(-)
> create mode 100755 tools/add_pack_s3_repo.sh
>
> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
> index cf13c382e..4dcaf9cd3 100644
> --- a/.gitlab-ci.yml
> +++ b/.gitlab-ci.yml
> @@ -231,7 +231,10 @@ debian_10:
> DIST: 'buster'
>
> static_build:
> - <<: *deploy_test_definition
> + <<: *release_only_definition
> + stage: test
> + tags:
> + - deploy_test
> variables:
> RUN_TESTS: 'ON'
> script:
> diff --git a/.gitlab.mk b/.gitlab.mk
> index 48a92e518..64664c64f 100644
> --- a/.gitlab.mk
> +++ b/.gitlab.mk
> @@ -98,13 +98,27 @@ vms_test_%:
> vms_shutdown:
> VBoxManage controlvm ${VMS_NAME} poweroff
>
> -# ########################
> -# Build RPM / Deb packages
> -# ########################
> +# ###########################
> +# Sources tarballs & packages
> +# ###########################
> +
> +# Push alpha and beta versions to <major>x bucket (say, 2x),
> +# stable to <major>.<minor> bucket (say, 2.2).
> +GIT_DESCRIBE=$(shell git describe HEAD)
> +MAJOR_VERSION=$(word 1,$(subst ., ,$(GIT_DESCRIBE)))
> +MINOR_VERSION=$(word 2,$(subst ., ,$(GIT_DESCRIBE)))
> +BUCKET=$(MAJOR_VERSION)_$(MINOR_VERSION)
> +ifeq ($(MINOR_VERSION),0)
> +BUCKET=$(MAJOR_VERSION)x
> +endif
> +ifeq ($(MINOR_VERSION),1)
> +BUCKET=$(MAJOR_VERSION)x
> +endif
>
> package: git_submodule_update
> git clone https://github.com/packpack/packpack.git packpack
> PACKPACK_EXTRA_DOCKER_RUN_PARAMS='--network=host' ./packpack/packpack
> + ./tools/add_pack_s3_repo.sh -b=${BUCKET} -o=${OS} -d=${DIST} build
>
> # ############
> # Static build
> diff --git a/.travis.mk b/.travis.mk
> index 42969ff56..a85f71ced 100644
> --- a/.travis.mk
> +++ b/.travis.mk
> @@ -8,10 +8,6 @@ MAX_FILES?=65534
>
> all: package
>
> -package:
> - git clone https://github.com/packpack/packpack.git packpack
> - ./packpack/packpack
> -
> test: test_$(TRAVIS_OS_NAME)
>
> # Redirect some targets via docker
> @@ -176,34 +172,35 @@ test_freebsd_no_deps: build_freebsd
>
> test_freebsd: deps_freebsd test_freebsd_no_deps
>
> -####################
> -# Sources tarballs #
> -####################
> -
> -source:
> - git clone https://github.com/packpack/packpack.git packpack
> - TARBALL_COMPRESSOR=gz packpack/packpack tarball
> +###############################
> +# Sources tarballs & packages #
> +###############################
>
> # Push alpha and beta versions to <major>x bucket (say, 2x),
> # stable to <major>.<minor> bucket (say, 2.2).
> -ifeq ($(TRAVIS_BRANCH),master)
> GIT_DESCRIBE=$(shell git describe HEAD)
> MAJOR_VERSION=$(word 1,$(subst ., ,$(GIT_DESCRIBE)))
> MINOR_VERSION=$(word 2,$(subst ., ,$(GIT_DESCRIBE)))
> -else
> -MAJOR_VERSION=$(word 1,$(subst ., ,$(TRAVIS_BRANCH)))
> -MINOR_VERSION=$(word 2,$(subst ., ,$(TRAVIS_BRANCH)))
> -endif
> -BUCKET=tarantool.$(MAJOR_VERSION).$(MINOR_VERSION).src
> +BUCKET=$(MAJOR_VERSION)_$(MINOR_VERSION)
> ifeq ($(MINOR_VERSION),0)
> -BUCKET=tarantool.$(MAJOR_VERSION)x.src
> +BUCKET=$(MAJOR_VERSION)x
> endif
> ifeq ($(MINOR_VERSION),1)
> -BUCKET=tarantool.$(MAJOR_VERSION)x.src
> +BUCKET=$(MAJOR_VERSION)x
> endif
>
> +packpack_prepare:
> + git clone https://github.com/packpack/packpack.git packpack
> +
> +package: packpack_prepare
> + ./packpack/packpack
> +
> +source: packpack_prepare
> + TARBALL_COMPRESSOR=gz packpack/packpack tarball
> +
> source_deploy:
> pip install awscli --user
> - aws --endpoint-url "${AWS_S3_ENDPOINT_URL}" s3 \
> - cp build/*.tar.gz "s3://${BUCKET}/" \
> - --acl public-read
> + for tarball in `ls build/*.tar.gz 2>/dev/null` ; do \
> + aws --endpoint-url "${AWS_S3_ENDPOINT_URL}" s3 \
> + cp $${tarball} "s3://tarantool_repo/${BUCKET}/sources/" \
> + --acl public-read ; \
> + done
> diff --git a/tools/add_pack_s3_repo.sh b/tools/add_pack_s3_repo.sh
> new file mode 100755
> index 000000000..2316b9015
> --- /dev/null
> +++ b/tools/add_pack_s3_repo.sh
> @@ -0,0 +1,493 @@
> +#!/bin/bash
> +set -e
> +
> +rm_file='rm -f'
> +rm_dir='rm -rf'
> +mk_dir='mkdir -p'
> +
> +alloss='ubuntu debian centos el fedora'
General comments for the script:
* as discussed offline with you and Sasha Tu., the "centos" alias is
excessive and can be removed everywhere within this script.
Changed.
* consider using $(...) instead of `...` to improve readability (see the
example right after this list).
Fixed.
* typo: fix the tabs misused for indentation.
Ok, done, as much as possible without making the code harder to understand.
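For the $(...) item, e.g. (same behaviour, just easier to read and nest):
| alldists=`get_os_dists $os`     # current
| alldists=$(get_os_dists $os)    # suggested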
> +
> +get_os_dists()
> +{
> + os=$1
> + alldists=
> +
> + if [ "$os" == "ubuntu" ]; then
> + alldists='trusty xenial cosmic disco bionic eoan'
> + elif [ "$os" == "debian" ]; then
> + alldists='jessie stretch buster'
> + elif [ "$os" == "centos" -o "$os" == "el" ]; then
> + alldists='6 7 8'
> + elif [ "$os" == "fedora" ]; then
> + alldists='27 28 29 30 31'
> + fi
> +
> + echo "$alldists"
> +}
> +
> +ws_prefix=/tmp/tarantool_repo_s3
Please move the prior line to the beginning of the script so that the
"constants" and "defaults" are kept in a single place.
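I.e. a single block near the top of the script, something like (sketch):
| rm_file='rm -f'
| rm_dir='rm -rf'
| mk_dir='mkdir -p'
| ws_prefix=/tmp/tarantool_repo_s3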
Corrected, help message improved.
> +create_lockfile()
> +{
> + lockfile -l 1000 $1
> +}
> +
> +usage()
> +{
> + cat <<EOF
> +Usage: $0 -b <S3 bucket> -o <OS name> -d <OS distribution> [-p <product>] <path>
> +Options:
> + -b|--bucket
> + MCS S3 bucket which will be used for storing the packages.
> + The name corresponds to the appropriate Tarantool branch:
> + master: 2x
> + 2.3: 2_3
> + 2.2: 2_2
> + 1.10: 1_10
> + -o|--os
> + OS to be used, one of the following (NOTE: centos == el):
> + $alloss
> + -d|--distribution
> + Distribution appropriate to the given OS:
> +EOF
> + for os in $alloss ; do
> + echo " $os: <"`get_os_dists $os`">"
> + done
> + cat <<EOF
> + -p|--product
> + Product name to be packed with, default name is 'tarantool'
> + -h|--help
> + Usage help message
Minor: the arguments description usually goes before the options description.
Ok, changed.
> + <path>
> + Path points to the directory with deb/rpm packages to be used.
> + Script can be used in one of 2 modes:
> + - path with binary packages for a single distribution
> + - path with 'pool' directory with APT repository (only: debian|ubuntu)
> +EOF
> +}
> +
> +for i in "$@"
> +do
> +case $i in
> + -b=*|--bucket=*)
> + branch="${i#*=}"
> + if [ "$branch" != "2x" -a "$branch" != "2_3" -a "$branch" != "2_2" -a "$branch" != "1_10" ]; then
grep seems to be more convenient here
| echo "$branch" | grep -qvP '^(1_10|2(x|_[2-4]))$'
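i.e. the whole check could be reduced to something like (untested sketch
based on the regexp above):
| if echo "$branch" | grep -qvP '^(1_10|2(x|_[2-4]))$' ; then
|     echo "ERROR: bucket '$branch' is not supported"
|     usage
|     exit 1
| fi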
Updated the help message routine to show that DIST is not a mandatory option.
> + echo "ERROR: bucket '$branch' is not supported"
> + usage
> + exit 1
> + fi
> + shift # past argument=value
> + ;;
> + -o=*|--os=*)
> + os="${i#*=}"
> + if [ "$os" == "el" ]; then
> + os=centos
> + fi
> + if ! echo $alloss | grep -F -q -w $os ; then
> + echo "ERROR: OS '$os' is not supported"
> + usage
> + exit 1
> + fi
> + shift # past argument=value
> + ;;
> + -d=*|--distribution=*)
> + DIST="${i#*=}"
> + shift # past argument=value
I guess DIST can be validated right here like the other variables above,
since it is a mandatory one. I see no need for the get_os_dists
subroutine: you can just initialize alldists at the beginning (like
alloss) and use it instead of "qx"-ing the corresponding function.
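E.g. the arm could look like this (sketch; it assumes $alldists is
already initialized by the time -d is parsed, i.e. -o comes first):
| -d=*|--distribution=*)
|     DIST="${i#*=}"
|     if ! echo $alldists | grep -F -q -w $DIST ; then
|         echo "ERROR: distribution '$DIST' is not supported"
|         usage
|         exit 1
|     fi
|     shift # past argument=value
|     ;;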
Renamed distribution variables
Minor: IMHO the naming should be consistent (DIST but os).
Fixed, just the obvious code line.
> + ;;
> + -p=*|--product=*)
> + product="${i#*=}"
> + shift # past argument=value
> + ;;
> + -h|--help)
> + usage
> + exit 0
> + ;;
> + *)
> + repo="${i#*=}"
> + pushd $repo >/dev/null ; repo=$PWD ; popd >/dev/null
> + shift # past argument=value
> + ;;
> +esac
> +done
> +
> +# check that all needed options were set
> +if [ "$branch" == "" ]; then
> + echo "ERROR: need to set -b|--bucket bucket option, check usage"
> + usage
> + exit 1
> +fi
> +if [ "$os" == "" ]; then
> + echo "ERROR: need to set -o|--os OS name option, check usage"
> + usage
> + exit 1
> +fi
> +alldists=`get_os_dists $os`
> +if [ -n "$DIST" ] && ! echo $alldists | grep -F -q -w $DIST ; then
> + echo "ERROR: the distribution '$DIST' set in the options is not in the supported list '$alldists'"
> + usage
> + exit 1
> +fi
> +
> +# set the path with binaries
> +product=$product
I don't get this line; it seems to be redundant, but I'm totally not a
Bash master.
Moved product and repo before the options parser
> +if [ "$product" == "" ]; then
> + product=tarantool
> +fi
Consider moving the defaults initialization before the argument parsing to
avoid branching like the one above.
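E.g. (sketch):
| # defaults, may be overridden by the options below
| product=tarantool
| repo=.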
Moved product and repo before the options parser
> +proddir=`echo $product | head -c 1`
> +
> +# set the path with binaries
> +if [ "$repo" == "" ]; then
> + repo=.
> +fi
Please consider the comment above related to defaults initialization.
Ok, corrected.
> +
> +aws='aws --endpoint-url https://hb.bizmrg.com'
Minor: I guess you can append the 's3' literal to the aws variable and use
| $aws cp --acl public-read $deb $s3/$locdeb
instead of
| $aws s3 cp --acl public-read $deb $s3/$locdeb
Right, this path is better mentioned in the usage routine instead of the default setup.
> +s3="s3://tarantool_repo/$branch/$os"
> +
> +# The 'pack_deb' function is especially created for DEB packages. It
> +# works with DEB-packing OSes like Ubuntu and Debian. It is based on the
> +# globally known tool 'reprepro' from:
> +# https://wiki.debian.org/DebianRepository/SetupWithReprepro
> +# This tool works with the complete set of distributions of the given OS.
> +# The result of the routine is a Debian package laid out in an APT
> +# repository with a file structure equal to Debian/Ubuntu:
> +# http://ftp.am.debian.org/debian/pool/main/t/tarantool/
> +# http://ftp.am.debian.org/ubuntu/pool/main/t/
> +function pack_deb {
> + # we need to push packages into 'main' repository only
> + component=main
> +
> + # debian has special directory 'pool' for packages
> + debdir=pool
> +
> + # get packages from pointed location either mirror path
> + if [ "$repo" == "" ] ; then
> + repo=/var/spool/apt-mirror/mirror/packagecloud.io/tarantool/$branch/$os
> + fi
Maybe I misread the logic above, but the condition above seems to be
always false.
Removed extra message and set usage call
> + if [ ! -d $repo/$debdir ] && ( [ "$DIST" == "" ] || ! ls $repo/*.deb $repo/*.dsc $repo/*.tar.*z >/dev/null 2>&1 ) ; then
> + echo "ERROR: Current '$repo' path doesn't have any of the following:"
> + echo "Usage with a distribution set via option '-d' and packages: $0 [path with *.deb *.dsc *.tar.*z files]"
> + echo "Usage with repositories: $0 [path to repository with '$debdir' subdirectory]"
Heredoc seems to be more convenient here.
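Something like the following (sketch, the wording is illustrative):
| cat <<EOF
| ERROR: the path '$repo' has none of the following:
|   - *.deb / *.dsc / *.tar.*z files (to be used with the '-d' option)
|   - a '$debdir' subdirectory with an APT repository
| EOF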
Ok, created prepare_ws routine
> + exit 1
> + fi
> +
As discussed offline, let's try to move the code below into a generic
function with an OS-specific handler (e.g. <os>_prep) to be executed
within.
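A rough sketch of what I mean (the helper names are illustrative only):
| pack() {
|     create_lockfile $ws_lockfile   # temporarily lock the publication
|     prepare_ws                     # recreate the temporary workspace
|     ${os}_prep                     # OS-specific handler: deb_prep / rpm_prep
|     $rm_file $ws_lockfile          # unlock the publishing
| }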
Corrected all long lines
> + # temporary lock the publication to the repository
> + ws=${ws_prefix}_${branch}_${os}
> + ws_lockfile=${ws}.lock
> + create_lockfile $ws_lockfile
> +
> + # create temporary workspace with repository copy
> + $rm_dir $ws
> + $mk_dir $ws
> +
> + # script works in one of 2 modes:
> + # - path with binary packages for a single distribution
> + # - path with 'pool' directory with APT repository
> + if [ "$DIST" != "" ] && ls $repo/*.deb $repo/*.dsc $repo/*.tar.*z >/dev/null 2>&1 ; then
Minor: consider splitting the condition above into two lines.
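E.g.:
| if [ "$DIST" != "" ] && \
|         ls $repo/*.deb $repo/*.dsc $repo/*.tar.*z >/dev/null 2>&1 ; then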
Right - corrected.
> + # copy single distribution with binaries packages
> + repopath=$ws/pool/${DIST}/main/$proddir/$product
I guess you missed an s/main/$component/ here.
Actually only absolute paths are in use; anyway, I don't see any need to save the previous directories.
> + $mk_dir ${repopath}
> + cp $repo/*.deb $repo/*.dsc $repo/*.tar.*z $repopath/.
> + elif [ -d $repo/$debdir ]; then
> + # copy 'pool' directory with APT repository
> + cp -rf $repo/$debdir $ws/.
> + else
> + echo "ERROR: neither distribution option '-d' with files $repo/*.deb $repo/*.dsc $repo/*.tar.*z set nor '$repo/$debdir' path found"
> + usage
> + $rm_file $ws_lockfile
> + exit 1
> + fi
> + cd $ws
Minor: I can't find the corresponding cd back to the previous directory, so
it can break post-processing.
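E.g. pushd/popd around the processing would keep the caller's directory
intact (sketch):
| pushd $ws >/dev/null
| # ... per-distribution processing ...
| popd >/dev/null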
Right - corrected.
> +
> + # create the configuration file for 'reprepro' tool
> + confpath=$ws/conf
> + $rm_dir $confpath
> + $mk_dir $confpath
> +
> + for dist in $alldists ; do
> + cat <<EOF >>$confpath/distributions
> +Origin: Tarantool
> +Label: tarantool.org
> +Suite: stable
> +Codename: $dist
> +Architectures: amd64 source
> +Components: main
I guess you missed an s/main/$component/ here.
Corrected.
> +Description: Unofficial Ubuntu Packages maintained by Tarantool
> +SignWith: 91B625E5
> +DebIndices: Packages Release . .gz .bz2
> +UDebIndices: Packages . .gz .bz2
> +DscIndices: Sources Release .gz .bz2
> +
> +EOF
> +done
Typo: adjust the indentation.
Fixed 2 places.
> +
> + # create standalone repository with separate components
> + for dist in $alldists ; do
> + echo =================== DISTRIBUTION: $dist =========================
> + updated_deb=0
> + updated_dsc=0
> +
> + # 1(binaries). use reprepro tool to generate Packages file
> + for deb in $ws/$debdir/$dist/$component/*/*/*.deb ; do
> + [ -f $deb ] || continue
> + locdeb=`echo $deb | sed "s#^$ws\/##g"`
Minor: the escaping is unnecessary here, since you use a different
separator character. Please consider this remark for the whole script.
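I.e. simply:
| locdeb=$(echo $deb | sed "s#^$ws/##g")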
Corrected.
> + echo "DEB: $deb"
> + # register DEB file to Packages file
> + reprepro -Vb . includedeb $dist $deb
> + # reprepro copied DEB file to local component which is not needed
> + $rm_dir $debdir/$component
> + # to keep all packages, avoid reprepro registering the DEB file in its own db
> + $rm_dir db
> + # copy the Packages file to avoid it being removed by the new DEB version
> + for packages in dists/$dist/$component/binary-*/Packages ; do
> + if [ ! -f $packages.saved ] ; then
> + # get the latest Packages file from S3
> + $aws s3 ls "$s3/$packages" 2>/dev/null && \
> + $aws s3 cp --acl public-read \
> + "$s3/$packages" $packages.saved || \
Minor: IMHO, splitting the previous command into two lines makes it less
readable. However, feel free to ignore this note.
Created standalone routine for metadata Packages and Sources files update
> + touch $packages.saved
> + fi
> + # check if the DEB file already exists in Packages from S3
> + if grep "^`grep "^SHA256: " $packages`$" $packages.saved ; then
> + echo "WARNING: DEB file already registered in S3!"
> + continue
> + fi
> + # store the new DEB entry
> + cat $packages >>$packages.saved
> + # save the registered DEB file to S3
> + $aws s3 cp --acl public-read $deb $s3/$locdeb
> + updated_deb=1
> + done
> + done
> +
I see this part is quite similar to the corresponding one above. Please
consider moving it into a separate generic function with a source/binary
specific handler to be executed within.
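A possible skeleton (the names and the exact split are illustrative, not
a final design):
| register_files() {
|     kind=$1        # 'deb' or 'dsc'
|     for file in $ws/$debdir/$dist/$component/*/*/*.$kind ; do
|         [ -f $file ] || continue
|         locfile=$(echo $file | sed "s#^$ws/##g")
|         # register the file in the Packages/Sources index
|         reprepro -Vb . include$kind $dist $file
|         # drop the local copies reprepro keeps for itself
|         $rm_dir $debdir/$component db
|         # the Packages vs Sources specific part goes into a per-kind
|         # handler, e.g. update_${kind}_metadata (hypothetical name)
|         update_${kind}_metadata $file $locfile
|     done
| }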
Corrected.
> + # 1(sources). use reprepro tool to generate Sources file
> + for dsc in $ws/$debdir/$dist/$component/*/*/*.dsc ; do
> + [ -f $dsc ] || continue
> + locdsc=`echo $dsc | sed "s#^$ws\/##g"`
> + echo "DSC: $dsc"
> + # register DSC file to Sources file
> + reprepro -Vb . includedsc $dist $dsc
> + # reprepro copied DSC file to component which is not needed
> + $rm_dir $debdir/$component
> + # to keep all sources, avoid reprepro registering the DSC file in its own db
> + $rm_dir db
> + # copy the Sources file to avoid it being removed by the new DSC version
> + sources=dists/$dist/$component/source/Sources
> + if [ ! -f $sources.saved ] ; then
> + # get the latest Sources file from S3
> + $aws s3 ls "$s3/$sources" && \
> + $aws s3 cp --acl public-read "$s3/$sources" $sources.saved || \
> + touch $sources.saved
> + fi
> + # WORKAROUND: for an unknown reason reprepro doesn't save the Sources file
> + gunzip -c $sources.gz >$sources
> + # check if the DSC file already exists in Sources from S3
> + hash=`grep '^Checksums-Sha256:' -A3 $sources | \
> + tail -n 1 | awk '{print $1}'`
> + if grep " $hash .*\.dsc$" $sources.saved ; then
> + echo "WARNING: DSC file already registered in S3!"
> + continue
> + fi
> + # store the new DSC entry
> + cat $sources >>$sources.saved
> + # save the registered DSC file to S3
> + $aws s3 cp --acl public-read $dsc $s3/$locdsc
> + tarxz=`echo $locdsc | sed 's#\.dsc$#.debian.tar.xz#g'`
> + $aws s3 cp --acl public-read $ws/$tarxz "$s3/$tarxz"
> + orig=`echo $locdsc | sed 's#-1\.dsc$#.orig.tar.xz#g'`
> + $aws s3 cp --acl public-read $ws/$orig "$s3/$orig"
> + updated_dsc=1
> + done
> +
> + # check if any DEB/DSC files were newly registered
> + [ "$updated_deb" == "0" -a "$updated_dsc" == "0" ] && \
> + continue || echo "Updating dists"
> +
> + # finalize the Packages file
> + for packages in dists/$dist/$component/binary-*/Packages ; do
> + mv $packages.saved $packages
> + done
> +
> + # 2(binaries). update Packages file archives
> + for packpath in dists/$dist/$component/binary-* ; do
> + pushd $packpath
> + sed "s#Filename: $debdir/$component/#Filename: $debdir/$dist/$component/#g" -i Packages
> + bzip2 -c Packages >Packages.bz2
> + gzip -c Packages >Packages.gz
> + popd
> + done
> +
> + # 2(sources). update Sources file archives
> + pushd dists/$dist/$component/source
> + sed "s#Directory: $debdir/$component/#Directory: $debdir/$dist/$component/#g" -i Sources
> + bzip2 -c Sources >Sources.bz2
> + gzip -c Sources >Sources.gz
> + popd
> +
> + # 3. update checksums entries of the Packages* files in *Release files
> + # NOTE: the *Release files have a stable structure, with the checksum
> + # entries laid out in the following way:
> + # MD5Sum:
> + # <checksum> <size> <file orig>
> + # <checksum> <size> <file debian>
> + # SHA1:
> + # <checksum> <size> <file orig>
> + # <checksum> <size> <file debian>
> + # SHA256:
> + # <checksum> <size> <file orig>
> + # <checksum> <size> <file debian>
> + # The script below puts the 'md5' value at the 1st found file entry,
> + # 'sha1' at the 2nd and 'sha256' at the 3rd
> + pushd dists/$dist
> + for file in `grep " $component/" Release | awk '{print $3}' | sort -u` ; do
> + sz=`stat -c "%s" $file`
> + md5=`md5sum $file | awk '{print $1}'`
> + sha1=`sha1sum $file | awk '{print $1}'`
> + sha256=`sha256sum $file | awk '{print $1}'`
> + awk 'BEGIN{c = 0} ; {
> + if ($3 == p) {
> + c = c + 1
> + if (c == 1) {print " " md " " s " " p}
> + if (c == 2) {print " " sh1 " " s " " p}
> + if (c == 3) {print " " sh2 " " s " " p}
> + } else {print $0}
> + }' p="$file" s="$sz" md="$md5" sh1="$sha1" sh2="$sha256" \
> + Release >Release.new
Typo: adjust the indentation.
Actually only absolute paths are in use; anyway, I don't see any need to save the previous directories.
> + mv Release.new Release
> + done
> + # resign the selfsigned InRelease file
> + $rm_file InRelease
> + gpg --clearsign -o InRelease Release
> + # resign the Release file
> + $rm_file Release.gpg
> + gpg -abs -o Release.gpg Release
> + popd
> +
> + # 4. sync the latest distribution path changes to S3
> + $aws s3 sync --acl public-read dists/$dist "$s3/dists/$dist"
> + done
> +
> + # unlock the publishing
> + $rm_file $ws_lockfile
> +}
> +
> +# The 'pack_rpm' function is especially created for RPM packages. It
> +# works with RPM-packing OSes like CentOS and Fedora. It is based on the
> +# globally known tool 'createrepo' from:
> +# https://linux.die.net/man/8/createrepo
> +# This tool works with a single distribution of the given OS.
> +# The result of the routine is an RPM package laid out in a YUM
> +# repository with a file structure equal to CentOS/Fedora:
> +# http://mirror.centos.org/centos/7/os/x86_64/Packages/
> +# http://mirrors.kernel.org/fedora/releases/30/Everything/x86_64/os/Packages/t/
> +function pack_rpm {
> + if ! ls $repo/*.rpm >/dev/null 2>&1 ; then
> + echo "ERROR: Current '$repo' has:"
> + ls -al $repo
> + echo "Usage: $0 [path with *.rpm files]"
> + exit 1
> + fi
> +
> + # temporary lock the publication to the repository
> + ws=${ws_prefix}_${branch}_${os}_${DIST}
> + ws_lockfile=${ws}.lock
> + create_lockfile $ws_lockfile
> +
> + # create temporary workspace with packages copies
> + $rm_dir $ws
> + $mk_dir $ws
> + cp $repo/*.rpm $ws/.
> + cd $ws
> +
> + # set the paths
> + if [ "$os" == "centos" ]; then
> + repopath=$DIST/os/x86_64
> + rpmpath=Packages
> + elif [ "$os" == "fedora" ]; then
> + repopath=releases/$DIST/Everything/x86_64/os
> + rpmpath=Packages/$proddir
> + fi
> + packpath=$repopath/$rpmpath
> +
> + # prepare local repository with packages
> + $mk_dir $packpath
> + mv *.rpm $packpath/.
> + cd $repopath
Minor: I can't find the corresponding cd back to the previous directory, so
it can break post-processing.
Corrected.
> +
> + # copy the current metadata files from S3
> + mkdir repodata.base
> + for file in `$aws s3 ls $s3/$repopath/repodata/ | awk '{print $NF}'` ; do
> + $aws s3 ls $s3/$repopath/repodata/$file || continue
> + $aws s3 cp $s3/$repopath/repodata/$file repodata.base/$file
> + done
> +
> + # create the new repository metadata files
> + createrepo --no-database --update --workers=2 --compress-type=gz --simple-md-filenames .
> + mv repodata repodata.adding
> +
> + # merge metadata files
> + mkdir repodata
> + head -n 2 repodata.adding/repomd.xml >repodata/repomd.xml
> + for file in filelists.xml other.xml primary.xml ; do
> + # 1. take the 1st line only - to skip the line with number of packages which is not needed
> + zcat repodata.adding/$file.gz | head -n 1 >repodata/$file
> + # 2. take 2nd line with metadata tag and update the packages number in it
> + packsold=0
> + if [ -f repodata.base/$file.gz ] ; then
> + packsold=`zcat repodata.base/$file.gz | head -n 2 | tail -n 1 | sed 's#.*packages="\(.*\)".*#\1#g'`
Typo: adjust the indentation.
Ok, corrected.
> + fi
> + packsnew=`zcat repodata.adding/$file.gz | head -n 2 | tail -n 1 | sed 's#.*packages="\(.*\)".*#\1#g'`
> + packs=$(($packsold+$packsnew))
> + zcat repodata.adding/$file.gz | head -n 2 | tail -n 1 | sed "s#packages=\".*\"#packages=\"$packs\"#g" >>repodata/$file
Minor: consider splitting huge pipelines into several lines, separated by
the pipe character.
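E.g.:
| packsnew=$(zcat repodata.adding/$file.gz |
|     head -n 2 | tail -n 1 |
|     sed 's#.*packages="\(.*\)".*#\1#g')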
> + # 3. take only 'package' tags from new file
> + zcat repodata.adding/$file.gz | tail -n +3 | head -n -1 >>repodata/$file
> + # 4. take only 'package' tags from old file if exists
> + if [ -f repodata.base/$file.gz ] ; then
> + zcat repodata.base/$file.gz | tail -n +3 | head -n -1 >>repodata/$file
> + fi
> + # 5. take the last closing line with metadata tag
> + zcat repodata.adding/$file.gz | tail -n 1 >>repodata/$file
> +
> + # get the new data
> + chsnew=`sha256sum repodata/$file | awk '{print $1}'`
> + sz=`stat --printf="%s" repodata/$file`
> + gzip repodata/$file
> + chsgznew=`sha256sum repodata/$file.gz | awk '{print $1}'`
> + szgz=`stat --printf="%s" repodata/$file.gz`
> + timestamp=`date +%s -r repodata/$file.gz`
> +
> + # add info to repomd.xml file
> + name=`echo $file | sed 's#\.xml$##g'`
> + cat <<EOF >>repodata/repomd.xml
> +<data type="$name">
> + <checksum type="sha256">$chsgznew</checksum>
> + <open-checksum type="sha256">$chsnew</open-checksum>
> + <location href="repodata/$file.gz"/>
> + <timestamp>$timestamp</timestamp>
> + <size>$szgz</size>
> + <open-size>$sz</open-size>
> +</data>
> +EOF
> + done
> + tail -n 1 repodata.adding/repomd.xml >>repodata/repomd.xml
> + gpg --detach-sign --armor repodata/repomd.xml
> +
> + # copy the packages to S3
> + for file in $rpmpath/*.rpm ; do
> + $aws s3 cp --acl public-read $file "$s3/$repopath/$file"
> + done
> +
> + # update the metadata at the S3
> + $aws s3 sync --acl public-read repodata "$s3/$repopath/repodata"
> +
> + # unlock the publishing
> + $rm_file $ws_lockfile
> +}
> +
> +if [ "$os" == "ubuntu" -o "$os" == "debian" ]; then
> + pack_deb
> +elif [ "$os" == "centos" -o "$os" == "fedora" ]; then
> + pack_rpm
> +else
> + echo "USAGE: the given OS '$os' is not supported, use one from the list: $alloss"
> + usage
> + exit 1
> +fi
> --
> 2.17.1
>
--
Best regards,
IM