[Tarantool-patches] [PATCH v3] Testing changes after 2nd review on S3

Igor Munkin imun at tarantool.org
Mon Jan 13 13:29:31 MSK 2020


Sasha,

Thanks for the patch! LGTM, but I left some polishing comments below;
please consider them.

Furthermore, I Cced Sasha Tu. to take a look at it, since the patch is a
large one and I could have overlooked something vital.

On 16.12.19, Alexander V. Tikhonov wrote:
> ---
> 
> Github: https://github.com/tarantool/tarantool/tree/avtikhon/gh-3380-push-packages-s3-full-ci
> Issue: https://github.com/tarantool/tarantool/issues/3380
> 
> v1: https://lists.tarantool.org/pipermail/tarantool-patches/2019-October/012021.html
> v2: https://lists.tarantool.org/pipermail/tarantool-patches/2019-November/012352.html
> 
> Changes v2:
> - made changes in script from draft to pre-release stages
> 
> Changes v3:
> - corrected code style, minor updates
> - script is ready for release

Please consider listing the changelog entries in reverse order (the
latest changes first).

> 
>  tools/add_pack_s3_repo.sh | 381 +++++++++++++++++++++-----------------
>  1 file changed, 209 insertions(+), 172 deletions(-)
> 

General comments:
* Please choose a single code style for function definitions. Currently
  you mix the two following approaches:
|  usage()
|  {
  and
|  function pack_rpm {
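  E.g. sticking to the first one everywhere would do (just a sketch; the
  particular style to pick is up to you):
|  pack_rpm()
|  {
|      <...>
|  }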

> diff --git a/tools/add_pack_s3_repo.sh b/tools/add_pack_s3_repo.sh
> index 2316b9015..280a07ab0 100755
> --- a/tools/add_pack_s3_repo.sh
> +++ b/tools/add_pack_s3_repo.sh
> @@ -4,8 +4,12 @@ set -e
>  rm_file='rm -f'
>  rm_dir='rm -rf'
>  mk_dir='mkdir -p'
> +ws_prefix=/tmp/tarantool_repo_s3
>  
> -alloss='ubuntu debian centos el fedora'
> +alloss='ubuntu debian el fedora'
> +product=tarantool
> +# the path with binaries either repository
> +repo=.
>  
>  get_os_dists()
>  {
> @@ -16,7 +20,7 @@ get_os_dists()
>          alldists='trusty xenial cosmic disco bionic eoan'

Minor: IMHO it's more convenient to sort the distros in release date
order.
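I.e. something like the following for ubuntu (assuming trusty, xenial,
bionic, cosmic, disco, eoan is the actual release order):
|         alldists='trusty xenial bionic cosmic disco eoan'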

>      elif [ "$os" == "debian" ]; then
>          alldists='jessie stretch buster'
> -    elif [ "$os" == "centos" -o "$os" == "el" ]; then
> +    elif [ "$os" == "el" ]; then
>          alldists='6 7 8'
>      elif [ "$os" == "fedora" ]; then
>          alldists='27 28 29 30 31'
> @@ -25,16 +29,45 @@ get_os_dists()
>      echo "$alldists"
>  }
>  
> -ws_prefix=/tmp/tarantool_repo_s3
> -create_lockfile()
> +prepare_ws()
>  {
> -    lockfile -l 1000 $1
> +    # temporary lock the publication to the repository
> +    ws=${ws_prefix}_${branch}_${os}
> +    if [ "$os" != "ubuntu" -a "$os" != "debian" ]; then
> +        ws=${ws}_${option_dist}
> +    fi

I guess the suffix can be constructed within the pack_{rpm,deb}
subroutines (i.e. in the prepare_ws callers) and passed as an argument
to prepare_ws.
Consider the following:
| prepare_ws ${os}                # for deb based distros
| prepare_ws ${os}_${option_dist} # for rpm based distros
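Then the callee might look as follows (a minimal sketch, assuming the
suffix is passed as the first positional argument):
| prepare_ws()
| {
|     suffix=$1
|     ws=${ws_prefix}_${branch}_${suffix}
|     ws_lockfile=${ws}.lock
|     <...>
| }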

> +    ws_lockfile=${ws}.lock
> +    if [ -f $ws_lockfile ]; then
> +        old_proc=$(cat $ws_lockfile)
> +    fi
> +    lockfile -l 60 $ws_lockfile
> +    chmod u+w $ws_lockfile && echo $$ >$ws_lockfile && chmod u-w $ws_lockfile
> +    if [ "$old_proc" != ""  -a "$old_proc" != "0" ]; then
> +        kill -9 $old_proc >/dev/null 2>&1 || true
> +    fi
> +
> +    # create temporary workspace with repository copy
> +    $rm_dir $ws
> +    $mk_dir $ws
>  }
>  
>  usage()
>  {
>      cat <<EOF
> -Usage: $0 -b <S3 bucket> -o <OS name> -d <OS distribuition> [-p <product>] <path>
> +Usage for store package binaries from the given path:
> +    $0 -b=<S3 bucket> -o=<OS name> -d=<OS distribuition> [-p=<product>] <path to package binaries>
> +
> +Usage for mirroring Debian|Ubuntu OS repositories:
> +    $0 -b=<S3 bucket> -o=<OS name> [-p=<product>] <path to 'pool' subdirectory with packages repository>
> +
> +Arguments:
> +    <path>
> +         Path points to the directory with deb/prm packages to be used.
> +         Script can be used in one of 2 modes:
> +          - path with binaries packages for a single distribution
> +          - path with 'pool' subdirectory with APT repository (only: debian|ubuntu), like:
> +                /var/spool/apt-mirror/mirror/packagecloud.io/tarantool/$branch/$os
> +
>  Options:
>      -b|--bucket
>          MCS S3 bucket which will be used for storing the packages.
> @@ -44,24 +77,19 @@ Options:
>              2.2: 2_2
>              1.10: 1_10
>      -o|--os
> -        OS to be checked, one of the list (NOTE: centos == el):
> +        OS to be checked, one of the list:
>              $alloss
>      -d|--distribution
>          Distribution appropriate to the given OS:
>  EOF
>      for os in $alloss ; do
> -        echo "            $os: <"`get_os_dists $os`">"
> +        echo "            $os: <"$(get_os_dists $os)">"
>      done
>      cat <<EOF
>      -p|--product
>           Product name to be packed with, default name is 'tarantool'
>      -h|--help
>           Usage help message
> -    <path>
> -         Path points to the directory with deb/prm packages to be used.
> -         Script can be used in one of 2 modes:
> -          - path with binaries packages for a single distribution
> -	  - path with 'pool' directory with APT repository (only: debian|ubuntu)
>  EOF
>  }
>  
> @@ -70,7 +98,7 @@ do
>  case $i in
>      -b=*|--bucket=*)
>      branch="${i#*=}"
> -    if [ "$branch" != "2x" -a "$branch" != "2_3" -a "$branch" != "2_2" -a "$branch" != "1_10" ]; then
> +    if echo "$branch" | grep -qvP '^(1_10|2(x|_[2-4]))$' ; then
>          echo "ERROR: bucket '$branch' is not supported"
>          usage
>          exit 1
> @@ -79,9 +107,6 @@ case $i in
>      ;;
>      -o=*|--os=*)
>      os="${i#*=}"
> -    if [ "$os" == "el" ]; then
> -        os=centos
> -    fi
>      if ! echo $alloss | grep -F -q -w $os ; then
>          echo "ERROR: OS '$os' is not supported"
>          usage
> @@ -90,7 +115,7 @@ case $i in
>      shift # past argument=value
>      ;;
>      -d=*|--distribution=*)
> -    DIST="${i#*=}"
> +    option_dist="${i#*=}"
>      shift # past argument=value
>      ;;
>      -p=*|--product=*)
> @@ -120,27 +145,68 @@ if [ "$os" == "" ]; then
>      usage
>      exit 1
>  fi
> -alldists=`get_os_dists $os`
> -if [ -n "$DIST" ] && ! echo $alldists | grep -F -q -w $DIST ; then
> -    echo "ERROR: set distribution at options '$DIST' not found at supported list '$alldists'"
> +alldists=$(get_os_dists $os)
> +if [ -n "$option_dist" ] && ! echo $alldists | grep -F -q -w $option_dist ; then
> +    echo "ERROR: set distribution at options '$option_dist' not found at supported list '$alldists'"
>      usage
>      exit 1
>  fi
>  
> -# set the path with binaries
> -product=$product
> -if [ "$product" == "" ]; then
> -    product=tarantool
> -fi
> -proddir=`echo $product | head -c 1`
> +# set the subpath with binaries based on literal character of the product name
> +proddir=$(echo $product | head -c 1)
>  
> -# set the path with binaries
> -if [ "$repo" == "" ]; then
> -    repo=.
> -fi
> -
> -aws='aws --endpoint-url https://hb.bizmrg.com'
> +# AWS defines
> +aws='aws --endpoint-url https://hb.bizmrg.com s3'
>  s3="s3://tarantool_repo/$branch/$os"

Minor: the s3 variable name seems ambiguous here. IMHO, bucket looks
way better.
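E.g. (with the corresponding usages updated):
| bucket="s3://tarantool_repo/$branch/$os"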

> +aws_cp_to_s3="$aws cp --acl public-read"
> +aws_sync_to_s3="$aws sync --acl public-read"

Nice change, but other commands also interact with S3. I propose to
change the naming to the following pattern:
| aws_<action>_public
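I.e. for the two lines above:
| aws_cp_public="$aws cp --acl public-read"
| aws_sync_public="$aws sync --acl public-read"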

> +
> +update_packfile()
> +{
> +    packfile=$1
> +    packtype=$2
> +
> +    locpackfile=$(echo $packfile | sed "s#^$ws/##g")
> +    # register DEB/DSC pack file to Packages/Sources file
> +    reprepro -Vb . include$packtype $loop_dist $packfile

Well, it's a really bad idea to use an external loop variable within a
function. You can pass it as an explicit argument; please do so.
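Something like the following (a minimal sketch):
| update_packfile()
| {
|     packfile=$1
|     packtype=$2
|     dist=$3
|     locpackfile=$(echo $packfile | sed "s#^$ws/##g")
|     reprepro -Vb . include$packtype $dist $packfile
|     <...>
| }
with the call sites changed accordingly, e.g.:
| update_packfile $deb deb $loop_dist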

> +    # reprepro copied DEB/DSC file to component which is not needed
> +    $rm_dir $debdir/$component
> +    # to have all sources avoid reprepro set DEB/DSC file to its own registry
> +    $rm_dir db
> +}
> +

Minor: Since update_{packfile,metadata} are used only for deb packages,
this can be reflected in the naming: update_deb_{packfile,metadata}.

> +update_metadata()
> +{
> +    packpath=$1
> +    packtype=$2
> +
> +    if [ ! -f $packpath.saved ] ; then
> +        # get the latest Sources file from S3
> +        $aws ls "$s3/$packpath" 2>/dev/null && \
> +            $aws cp "$s3/$packpath" $packpath.saved || \
> +            touch $packpath.saved
> +    fi
> +
> +    if [ "$packtype" == "dsc" ]; then
> +        # WORKAROUND: unknown why, but reprepro doesn`t save the Sources file
> +        gunzip -c $packpath.gz >$packpath
> +        # check if the DSC file already exists in Sources from S3
> +        hash=$(grep '^Checksums-Sha256:' -A3 $packpath | \
> +            tail -n 1 | awk '{print $1}')
> +        if grep " $hash .*\.dsc$" $packpath.saved ; then
> +            echo "WARNING: DSC file already registered in S3!"
> +            return 1
> +        fi
> +    elif [ "$packtype" == "deb" ]; then
> +        # check if the DEB file already exists in Packages from S3
> +        if grep "^$(grep '^SHA256: ' $packages)$" $packages.saved ; then
> +            echo "WARNING: DEB file already registered in S3!"
> +            return 1
> +        fi
> +    fi
> +    # store the new DEB entry
> +    cat $packpath >>$packpath.saved
> +}
>  
>  # The 'pack_deb' function especialy created for DEB packages. It works
>  # with DEB packing OS like Ubuntu, Debian. It is based on globaly known
> @@ -158,32 +224,27 @@ function pack_deb {
>      # debian has special directory 'pool' for packages
>      debdir=pool
>  
> -    # get packages from pointed location either mirror path
> -    if [ "$repo" == "" ] ; then
> -        repo=/var/spool/apt-mirror/mirror/packagecloud.io/tarantool/$branch/$os
> -    fi
> -    if [ ! -d $repo/$debdir ] && ( [ "$DIST" == "" ] || ! ls $repo/*.deb $repo/*.dsc $repo/*.tar.*z >/dev/null 2>&1 ) ; then
> +    # get packages from pointed location
> +    if [ ! -d $repo/$debdir ] && \
> +        ( [ "$option_dist" == "" ] || \
> +            ! ls $repo/*.deb $repo/*.dsc $repo/*.tar.*z >/dev/null 2>&1 ) ; then
>          echo "ERROR: Current '$repo' path doesn't have any of the following:"
> -        echo "Usage with set distribuition with option '-d' and packages: $0 [path with *.deb *.dsc *.tar.*z files]"
> -        echo "Usage with repositories: $0 [path to repository with '$debdir' subdirectory]"
> +        echo " - $0 run option '-d' and DEB packages in path"
> +        echo " - 'pool' subdirectory with APT repositories"
> +        usage
>          exit 1
>      fi
>  
> -    # temporary lock the publication to the repository
> -    ws=${ws_prefix}_${branch}_${os}
> -    ws_lockfile=${ws}.lock
> -    create_lockfile $ws_lockfile
> -
> -    # create temporary workspace with repository copy
> -    $rm_dir $ws
> -    $mk_dir $ws
> +    # prepare the workspace
> +    prepare_ws
>  
>      # script works in one of 2 modes:
>      # - path with binaries packages for a single distribution
>      # - path with 'pool' directory with APT repository
> -    if [ "$DIST" != "" ] && ls $repo/*.deb $repo/*.dsc $repo/*.tar.*z >/dev/null 2>&1 ; then
> +    if [ "$option_dist" != "" ] && \
> +            ls $repo/*.deb $repo/*.dsc $repo/*.tar.*z >/dev/null 2>&1 ; then
>          # copy single distribution with binaries packages
> -        repopath=$ws/pool/${DIST}/main/$proddir/$product
> +        repopath=$ws/pool/${option_dist}/$component/$proddir/$product
>          $mk_dir ${repopath}
>          cp $repo/*.deb $repo/*.dsc $repo/*.tar.*z $repopath/.
>      elif [ -d $repo/$debdir ]; then
> @@ -191,10 +252,12 @@ function pack_deb {
>          cp -rf $repo/$debdir $ws/.
>      else
>          echo "ERROR: neither distribution option '-d' with files $repo/*.deb $repo/*.dsc $repo/*.tar.*z set nor '$repo/$debdir' path found"
> -	usage
> -	$rm_file $wslock
> -	exit 1
> +        usage
> +        $rm_file $wslock
> +        exit 1
>      fi
> +
> +    swd=$PWD
>      cd $ws

Minor: I see no reason not to use the more fitting pushd/popd here.
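I.e. (a sketch):
| pushd $ws
| <...>
| popd
so there is no need to save/restore the current directory via swd.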

>  
>      # create the configuration file for 'reprepro' tool
> @@ -202,14 +265,14 @@ function pack_deb {
>      $rm_dir $confpath
>      $mk_dir $confpath
>  
> -    for dist in $alldists ; do
> +    for loop_dist in $alldists ; do
>          cat <<EOF >>$confpath/distributions
>  Origin: Tarantool
>  Label: tarantool.org
>  Suite: stable
> -Codename: $dist
> +Codename: $loop_dist
>  Architectures: amd64 source
> -Components: main
> +Components: $component
>  Description: Unofficial Ubuntu Packages maintained by Tarantool
>  SignWith: 91B625E5
>  DebIndices: Packages Release . .gz .bz2
> @@ -217,84 +280,49 @@ UDebIndices: Packages . .gz .bz2
>  DscIndices: Sources Release .gz .bz2
>  
>  EOF
> -done
> +    done
>  
>      # create standalone repository with separate components
> -    for dist in $alldists ; do
> -        echo =================== DISTRIBUTION: $dist =========================
> +    for loop_dist in $alldists ; do
> +        echo ================ DISTRIBUTION: $loop_dist ====================
>          updated_deb=0
>          updated_dsc=0
>  
>          # 1(binaries). use reprepro tool to generate Packages file
> -        for deb in $ws/$debdir/$dist/$component/*/*/*.deb ; do
> +        for deb in $ws/$debdir/$loop_dist/$component/*/*/*.deb ; do
>              [ -f $deb ] || continue
> -            locdeb=`echo $deb | sed "s#^$ws\/##g"`
> -            echo "DEB: $deb"
> -            # register DEB file to Packages file
> -            reprepro -Vb . includedeb $dist $deb
> -            # reprepro copied DEB file to local component which is not needed
> -            $rm_dir $debdir/$component
> -            # to have all packages avoid reprepro set DEB file to its own registry
> -            $rm_dir db
> -            # copy Packages file to avoid of removing by the new DEB version
> -            for packages in dists/$dist/$component/binary-*/Packages ; do
> -                if [ ! -f $packages.saved ] ; then
> -                    # get the latest Packages file from S3
> -                    $aws s3 ls "$s3/$packages" 2>/dev/null && \
> -                        $aws s3 cp --acl public-read \
> -                        "$s3/$packages" $packages.saved || \
> -                        touch $packages.saved
> -                fi
> -                # check if the DEB file already exists in Packages from S3
> -                if grep "^`grep "^SHA256: " $packages`$" $packages.saved ; then
> -                    echo "WARNING: DEB file already registered in S3!"
> -                    continue
> -                fi
> -                # store the new DEB entry
> -                cat $packages >>$packages.saved
> -                # save the registered DEB file to S3
> -                $aws s3 cp --acl public-read $deb $s3/$locdeb
> +            # regenerate DEB pack
> +            update_packfile $deb deb
> +            echo "Regenerated DEB file: $locpackfile"
> +            for packages in dists/$loop_dist/$component/binary-*/Packages ; do
> +                # copy Packages file to avoid of removing by the new DEB version
> +                # update metadata 'Packages' files
> +                if ! update_metadata $packages deb ; then continue ; fi

I find the following approach more convenient than the one-line if:
| update_metadata $packages deb || continue

>                  updated_deb=1
>              done
> +            # save the registered DEB file to S3
> +            if [ "$updated_deb" == 1 ]; then
> +                $aws_cp_to_s3 $deb $s3/$locpackfile
> +            fi
>          done
>  
>          # 1(sources). use reprepro tool to generate Sources file
> -        for dsc in $ws/$debdir/$dist/$component/*/*/*.dsc ; do
> +        for dsc in $ws/$debdir/$loop_dist/$component/*/*/*.dsc ; do
>              [ -f $dsc ] || continue
> -            locdsc=`echo $dsc | sed "s#^$ws\/##g"`
> -            echo "DSC: $dsc"
> -            # register DSC file to Sources file
> -            reprepro -Vb . includedsc $dist $dsc
> -            # reprepro copied DSC file to component which is not needed
> -            $rm_dir $debdir/$component
> -            # to have all sources avoid reprepro set DSC file to its own registry
> -            $rm_dir db
> +            # regenerate DSC pack
> +            update_packfile $dsc dsc
> +            echo "Regenerated DSC file: $locpackfile"
>              # copy Sources file to avoid of removing by the new DSC version
> -            sources=dists/$dist/$component/source/Sources
> -            if [ ! -f $sources.saved ] ; then
> -                # get the latest Sources file from S3
> -                $aws s3 ls "$s3/$sources" && \
> -                    $aws s3 cp --acl public-read "$s3/$sources" $sources.saved || \
> -                    touch $sources.saved
> -            fi
> -            # WORKAROUND: unknown why, but reprepro doesn`t save the Sources file
> -            gunzip -c $sources.gz >$sources
> -            # check if the DSC file already exists in Sources from S3
> -            hash=`grep '^Checksums-Sha256:' -A3 $sources | \
> -                tail -n 1 | awk '{print $1}'`
> -            if grep " $hash .*\.dsc$" $sources.saved ; then
> -                echo "WARNING: DSC file already registered in S3!"
> -                continue
> -            fi
> -            # store the new DSC entry
> -            cat $sources >>$sources.saved
> -            # save the registered DSC file to S3
> -            $aws s3 cp --acl public-read $dsc $s3/$locdsc
> -            tarxz=`echo $locdsc | sed 's#\.dsc$#.debian.tar.xz#g'`
> -            $aws s3 cp --acl public-read $ws/$tarxz "$s3/$tarxz"
> -            orig=`echo $locdsc | sed 's#-1\.dsc$#.orig.tar.xz#g'`
> -            $aws s3 cp --acl public-read $ws/$orig "$s3/$orig"
> +            # update metadata 'Sources' file
> +            if ! update_metadata dists/$loop_dist/$component/source/Sources dsc \
> +                ; then continue ; fi

I find the following approach more convenient than the one-line if:
| update_metadata dists/$loop_dist/$component/source/Sources dsc || continue

>              updated_dsc=1
> +            # save the registered DSC file to S3
> +            $aws_cp_to_s3 $dsc $s3/$locpackfile
> +            tarxz=$(echo $locpackfile | sed 's#\.dsc$#.debian.tar.xz#g')
> +            $aws_cp_to_s3 $ws/$tarxz "$s3/$tarxz"
> +            orig=$(echo $locpackfile | sed 's#-1\.dsc$#.orig.tar.xz#g')
> +            $aws_cp_to_s3 $ws/$orig "$s3/$orig"
>          done
>  
>          # check if any DEB/DSC files were newly registered
> @@ -302,30 +330,30 @@ done
>              continue || echo "Updating dists"
>  
>          # finalize the Packages file
> -        for packages in dists/$dist/$component/binary-*/Packages ; do
> +        for packages in dists/$loop_dist/$component/binary-*/Packages ; do
>              mv $packages.saved $packages
>          done
>  
>          # 2(binaries). update Packages file archives
> -        for packpath in dists/$dist/$component/binary-* ; do
> +        for packpath in dists/$loop_dist/$component/binary-* ; do
>              pushd $packpath
> -            sed "s#Filename: $debdir/$component/#Filename: $debdir/$dist/$component/#g" -i Packages
> +            sed "s#Filename: $debdir/$component/#Filename: $debdir/$loop_dist/$component/#g" -i Packages
>              bzip2 -c Packages >Packages.bz2
>              gzip -c Packages >Packages.gz
>              popd
>          done
>  
>          # 2(sources). update Sources file archives
> -        pushd dists/$dist/$component/source
> -        sed "s#Directory: $debdir/$component/#Directory: $debdir/$dist/$component/#g" -i Sources
> +        pushd dists/$loop_dist/$component/source
> +        sed "s#Directory: $debdir/$component/#Directory: $debdir/$loop_dist/$component/#g" -i Sources
>          bzip2 -c Sources >Sources.bz2
>          gzip -c Sources >Sources.gz
>          popd
>  
>          # 3. update checksums entries of the Packages* files in *Release files
> -	# NOTE: it is stable structure of the *Release files when the checksum
> -	#       entries in it in the following way:
> -	# MD5Sum:
> +        # NOTE: it is stable structure of the *Release files when the checksum
> +        #       entries in it in the following way:
> +        # MD5Sum:
>          #  <checksum> <size> <file orig>
>          #  <checksum> <size> <file debian>
>          # SHA1:
> @@ -334,14 +362,14 @@ done
>          # SHA256:
>          #  <checksum> <size> <file orig>
>          #  <checksum> <size> <file debian>
> -	#       The script bellow puts 'md5' value at the 1st found file entry,
> -	#       'sha1' - at the 2nd and 'sha256' at the 3rd
> -        pushd dists/$dist
> -        for file in `grep " $component/" Release | awk '{print $3}' | sort -u` ; do
> -            sz=`stat -c "%s" $file`
> -            md5=`md5sum $file | awk '{print $1}'`
> -            sha1=`sha1sum $file | awk '{print $1}'`
> -            sha256=`sha256sum $file | awk '{print $1}'`
> +        #       The script bellow puts 'md5' value at the 1st found file entry,
> +        #       'sha1' - at the 2nd and 'sha256' at the 3rd
> +        pushd dists/$loop_dist
> +        for file in $(grep " $component/" Release | awk '{print $3}' | sort -u) ; do
> +            sz=$(stat -c "%s" $file)
> +            md5=$(md5sum $file | awk '{print $1}')
> +            sha1=$(sha1sum $file | awk '{print $1}')
> +            sha256=$(sha256sum $file | awk '{print $1}')
>              awk 'BEGIN{c = 0} ; {
>                  if ($3 == p) {
>                      c = c + 1
> @@ -350,7 +378,7 @@ done
>                      if (c == 3) {print " " sh2 " " s " " p}
>                  } else {print $0}
>              }' p="$file" s="$sz" md="$md5" sh1="$sha1" sh2="$sha256" \
> -            Release >Release.new
> +                    Release >Release.new
>              mv Release.new Release
>          done
>          # resign the selfsigned InRelease file
> @@ -362,11 +390,13 @@ done
>          popd
>  
>          # 4. sync the latest distribution path changes to S3
> -        $aws s3 sync --acl public-read dists/$dist "$s3/dists/$dist"
> +        $aws_sync_to_s3 dists/$loop_dist "$s3/dists/$loop_dist"
>      done
>  
>      # unlock the publishing
>      $rm_file $ws_lockfile
> +
> +    cd $swd
>  }
>  
>  # The 'pack_rpm' function especialy created for RPM packages. It works
> @@ -380,29 +410,26 @@ done
>  #     http://mirrors.kernel.org/fedora/releases/30/Everything/x86_64/os/Packages/t/
>  function pack_rpm {
>      if ! ls $repo/*.rpm >/dev/null 2>&1 ; then
> -        echo "ERROR: Current '$repo' has:"
> -        ls -al $repo
> -        echo "Usage: $0 [path with *.rpm files]"
> +        echo "ERROR: Current '$repo' path doesn't have RPM packages in path"
> +        usage
>          exit 1
>      fi
>  
> -    # temporary lock the publication to the repository
> -    ws=${prefix_lockfile}_${branch}_${os}_${DIST}
> -    ws_lockfile=${ws}.lock
> -    create_lockfile $ws_lockfile
> +    # prepare the workspace
> +    prepare_ws
>  
> -    # create temporary workspace with packages copies
> -    $rm_dir $ws
> -    $mk_dir $ws
> +    # copy the needed package binaries to the workspace
>      cp $repo/*.rpm $ws/.
> +
> +    swd=$PWD
>      cd $ws

Minor: I see no reason not to use the more fitting pushd/popd here
either (see the sketch above).

>  
>      # set the paths
> -    if [ "$os" == "centos" ]; then
> -        repopath=$DIST/os/x86_64
> +    if [ "$os" == "el" ]; then
> +        repopath=$option_dist/os/x86_64
>          rpmpath=Packages
>      elif [ "$os" == "fedora" ]; then
> -        repopath=releases/$DIST/Everything/x86_64/os
> +        repopath=releases/$option_dist/Everything/x86_64/os
>          rpmpath=Packages/$proddir
>      fi
>      packpath=$repopath/$rpmpath
> @@ -410,53 +437,61 @@ function pack_rpm {
>      # prepare local repository with packages
>      $mk_dir $packpath
>      mv *.rpm $packpath/.
> -    cd $repopath
> +    cd $ws/$repopath

Minor: As discussed offline, $ws is always an absolute path, but the
command above looks confusing to me given the prior cd command.
Feel free to ignore.
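E.g. a relative change would read less confusing to me (a sketch, since
the script is already in $ws at this point):
|     cd $repopath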

>  
>      # copy the current metadata files from S3
>      mkdir repodata.base
> -    for file in `$aws s3 ls $s3/$repopath/repodata/ | awk '{print $NF}'` ; do
> -        $aws s3 ls $s3/$repopath/repodata/$file || continue
> -        $aws s3 cp $s3/$repopath/repodata/$file repodata.base/$file
> +    for file in $($aws ls $s3/$repopath/repodata/ | awk '{print $NF}') ; do
> +        $aws ls $s3/$repopath/repodata/$file || continue
> +        $aws cp $s3/$repopath/repodata/$file repodata.base/$file
>      done
>  
>      # create the new repository metadata files
> -    createrepo --no-database --update --workers=2 --compress-type=gz --simple-md-filenames .
> +    createrepo --no-database --update --workers=2 \
> +        --compress-type=gz --simple-md-filenames .
>      mv repodata repodata.adding
>  
>      # merge metadata files
>      mkdir repodata
>      head -n 2 repodata.adding/repomd.xml >repodata/repomd.xml
>      for file in filelists.xml other.xml primary.xml ; do
> -        # 1. take the 1st line only - to skip the line with number of packages which is not needed
> +        # 1. take the 1st line only - to skip the line with
> +        #    number of packages which is not needed
>          zcat repodata.adding/$file.gz | head -n 1 >repodata/$file
> -        # 2. take 2nd line with metadata tag and update the packages number in it
> +        # 2. take 2nd line with metadata tag and update
> +        #    the packages number in it
>          packsold=0
>          if [ -f repodata.base/$file.gz ] ; then
> -        packsold=`zcat repodata.base/$file.gz | head -n 2 | tail -n 1 | sed 's#.*packages="\(.*\)".*#\1#g'`
> +            packsold=$(zcat repodata.base/$file.gz | head -n 2 | \
> +                tail -n 1 | sed 's#.*packages="\(.*\)".*#\1#g')
>          fi
> -        packsnew=`zcat repodata.adding/$file.gz | head -n 2 | tail -n 1 | sed 's#.*packages="\(.*\)".*#\1#g'`
> +        packsnew=$(zcat repodata.adding/$file.gz | head -n 2 | \
> +            tail -n 1 | sed 's#.*packages="\(.*\)".*#\1#g')
>          packs=$(($packsold+$packsnew))
> -        zcat repodata.adding/$file.gz | head -n 2 | tail -n 1 | sed "s#packages=\".*\"#packages=\"$packs\"#g" >>repodata/$file
> +        zcat repodata.adding/$file.gz | head -n 2 | tail -n 1 | \
> +            sed "s#packages=\".*\"#packages=\"$packs\"#g" >>repodata/$file
>          # 3. take only 'package' tags from new file
> -        zcat repodata.adding/$file.gz | tail -n +3 | head -n -1 >>repodata/$file
> +        zcat repodata.adding/$file.gz | tail -n +3 | head -n -1 \
> +            >>repodata/$file
>          # 4. take only 'package' tags from old file if exists
>          if [ -f repodata.base/$file.gz ] ; then
> -            zcat repodata.base/$file.gz | tail -n +3 | head -n -1 >>repodata/$file
> +            zcat repodata.base/$file.gz | tail -n +3 | head -n -1 \
> +                >>repodata/$file
>          fi
>          # 5. take the last closing line with metadata tag
>          zcat repodata.adding/$file.gz | tail -n 1 >>repodata/$file
>  
>          # get the new data
> -        chsnew=`sha256sum repodata/$file | awk '{print $1}'`
> -        sz=`stat --printf="%s" repodata/$file`
> +        chsnew=$(sha256sum repodata/$file | awk '{print $1}')
> +        sz=$(stat --printf="%s" repodata/$file)
>          gzip repodata/$file
> -        chsgznew=`sha256sum repodata/$file.gz | awk '{print $1}'`
> -        szgz=`stat --printf="%s" repodata/$file.gz`
> -        timestamp=`date +%s -r repodata/$file.gz`
> +        chsgznew=$(sha256sum repodata/$file.gz | awk '{print $1}')
> +        szgz=$(stat --printf="%s" repodata/$file.gz)
> +        timestamp=$(date +%s -r repodata/$file.gz)
>  
>          # add info to repomd.xml file
> -        name=`echo $file | sed 's#\.xml$##g'`
> -	cat <<EOF >>repodata/repomd.xml
> +        name=$(echo $file | sed 's#\.xml$##g')
> +        cat <<EOF >>repodata/repomd.xml
>  <data type="$name">
>    <checksum type="sha256">$chsgznew</checksum>
>    <open-checksum type="sha256">$chsnew</open-checksum>
> @@ -472,19 +507,21 @@ EOF
>  
>      # copy the packages to S3
>      for file in $rpmpath/*.rpm ; do
> -        $aws s3 cp --acl public-read $file "$s3/$repopath/$file"
> +        $aws_cp_to_s3 $file "$s3/$repopath/$file"
>      done
>  
>      # update the metadata at the S3
> -    $aws s3 sync --acl public-read repodata "$s3/$repopath/repodata"
> +    $aws_sync_to_s3 repodata "$s3/$repopath/repodata"
>  
>      # unlock the publishing
>      $rm_file $ws_lockfile
> +
> +    cd $swd
>  }
>  
>  if [ "$os" == "ubuntu" -o "$os" == "debian" ]; then
>      pack_deb
> -elif [ "$os" == "centos" -o "$os" == "fedora" ]; then
> +elif [ "$os" == "el" -o "$os" == "fedora" ]; then
>      pack_rpm
>  else
>      echo "USAGE: given OS '$os' is not supported, use any single from the list: $alloss"
> -- 
> 2.17.1
> 

-- 
Best regards,
IM

