Tarantool development patches archive
From: Serge Petrenko <sergepetrenko@tarantool.org>
To: Vladislav Shpilevoy <v.shpilevoy@tarantool.org>,
	tarantool-patches@dev.tarantool.org, gorcunov@gmail.com
Subject: Re: [Tarantool-patches] [PATCH v2 03/10] wal: don't touch box.cfg.wal_dir more than once
Date: Fri, 4 Sep 2020 11:20:40 +0300	[thread overview]
Message-ID: <8e0f0cb1-cb5a-85de-8c4b-86cdd1fede29@tarantool.org> (raw)
In-Reply-To: <7bc33303893d7bce5b2c3d4d3c6f44cc61d64434.1599173312.git.v.shpilevoy@tarantool.org>


04.09.2020 01:51, Vladislav Shpilevoy wrote:
> Relay.cc and box.cc obtained the box.cfg.wal_dir value using a
> cfg_gets() call in order to initialize WAL and create struct
> recovery objects.
>
> That is not only a bit dangerous (cfg_gets() uses the Lua API and
> can throw a Lua error) and slow, but also unnecessary - the
> wal_dir parameter is constant and can't be changed after the
> instance starts.
>
> This means the value can be stored once and then used without
> Lua.
>
> The main motivation is that the WAL directory path will be needed
> inside relay threads to restart their recovery iterators in the
> Raft patch. They can't use cfg_gets(), because Lua lives in the TX
> thread, but they can access a constant global variable, introduced
> in this patch (it existed before, but now has an accessor).
>
> Needed for #1146


LGTM

> ---
>   src/box/box.cc   | 9 ++++-----
>   src/box/relay.cc | 7 ++-----
>   src/box/wal.c    | 6 ++++++
>   src/box/wal.h    | 7 +++++++
>   4 files changed, 19 insertions(+), 10 deletions(-)
>
> diff --git a/src/box/box.cc b/src/box/box.cc
> index 0813603c0..eeb00d5e2 100644
> --- a/src/box/box.cc
> +++ b/src/box/box.cc
> @@ -2395,8 +2395,7 @@ local_recovery(const struct tt_uuid *instance_uuid,
>   	wal_stream_create(&wal_stream);
>   
>   	struct recovery *recovery;
> -	recovery = recovery_new(cfg_gets("wal_dir"),
> -				cfg_geti("force_recovery"),
> +	recovery = recovery_new(wal_dir(), cfg_geti("force_recovery"),
>   				checkpoint_vclock);
>   
>   	/*
> @@ -2469,7 +2468,7 @@ local_recovery(const struct tt_uuid *instance_uuid,
>   		recovery_follow_local(recovery, &wal_stream.base, "hot_standby",
>   				      cfg_getd("wal_dir_rescan_delay"));
>   		while (true) {
> -			if (path_lock(cfg_gets("wal_dir"), &wal_dir_lock))
> +			if (path_lock(wal_dir(), &wal_dir_lock))
>   				diag_raise();
>   			if (wal_dir_lock >= 0)
>   				break;
> @@ -2616,7 +2615,7 @@ box_cfg_xc(void)
>   	 * Lock the write ahead log directory to avoid multiple
>   	 * instances running in the same dir.
>   	 */
> -	if (path_lock(cfg_gets("wal_dir"), &wal_dir_lock) < 0)
> +	if (path_lock(wal_dir(), &wal_dir_lock) < 0)
>   		diag_raise();
>   	if (wal_dir_lock < 0) {
>   		/**
> @@ -2625,7 +2624,7 @@ box_cfg_xc(void)
>   		 * WAL dir must contain at least one xlog.
>   		 */
>   		if (!cfg_geti("hot_standby") || checkpoint == NULL)
> -			tnt_raise(ClientError, ER_ALREADY_RUNNING, cfg_gets("wal_dir"));
> +			tnt_raise(ClientError, ER_ALREADY_RUNNING, wal_dir());
>   	}
>   
>   	struct journal bootstrap_journal;
> diff --git a/src/box/relay.cc b/src/box/relay.cc
> index a7843a8c2..124b0f52f 100644
> --- a/src/box/relay.cc
> +++ b/src/box/relay.cc
> @@ -34,7 +34,6 @@
>   #include "tt_static.h"
>   #include "scoped_guard.h"
>   #include "cbus.h"
> -#include "cfg.h"
>   #include "errinj.h"
>   #include "fiber.h"
>   #include "say.h"
> @@ -369,8 +368,7 @@ relay_final_join(int fd, uint64_t sync, struct vclock *start_vclock,
>   		relay_delete(relay);
>   	});
>   
> -	relay->r = recovery_new(cfg_gets("wal_dir"), false,
> -			       start_vclock);
> +	relay->r = recovery_new(wal_dir(), false, start_vclock);
>   	vclock_copy(&relay->stop_vclock, stop_vclock);
>   
>   	int rc = cord_costart(&relay->cord, "final_join",
> @@ -731,8 +729,7 @@ relay_subscribe(struct replica *replica, int fd, uint64_t sync,
>   	});
>   
>   	vclock_copy(&relay->local_vclock_at_subscribe, &replicaset.vclock);
> -	relay->r = recovery_new(cfg_gets("wal_dir"), false,
> -			        replica_clock);
> +	relay->r = recovery_new(wal_dir(), false, replica_clock);
>   	vclock_copy(&relay->tx.vclock, replica_clock);
>   	relay->version_id = replica_version_id;
>   
> diff --git a/src/box/wal.c b/src/box/wal.c
> index 045006b60..e181e58d9 100644
> --- a/src/box/wal.c
> +++ b/src/box/wal.c
> @@ -201,6 +201,12 @@ wal_mode(void)
>   	return wal_writer_singleton.wal_mode;
>   }
>   
> +const char *
> +wal_dir(void)
> +{
> +	return wal_writer_singleton.wal_dir.dirname;
> +}
> +
>   static void
>   wal_write_to_disk(struct cmsg *msg);
>   
> diff --git a/src/box/wal.h b/src/box/wal.h
> index 9d0cada46..581306fe9 100644
> --- a/src/box/wal.h
> +++ b/src/box/wal.h
> @@ -98,6 +98,13 @@ wal_enable(void);
>   void
>   wal_free(void);
>   
> +/**
> + * Get WAL directory path. The value never changes after box is
> + * configured first time. Safe to use from multiple threads.
> + */
> +const char *
> +wal_dir(void);
> +
>   struct wal_watcher_msg {
>   	struct cmsg cmsg;
>   	struct wal_watcher *watcher;

-- 
Serge Petrenko


Thread overview: 27+ messages
2020-09-03 22:51 [Tarantool-patches] [PATCH v2 00/10] dRaft Vladislav Shpilevoy
2020-09-03 22:51 ` [Tarantool-patches] [PATCH v2 01/10] applier: store instance_id in struct applier Vladislav Shpilevoy
2020-09-04  8:13   ` Serge Petrenko
2020-09-07 22:54     ` Vladislav Shpilevoy
2020-09-03 22:51 ` [Tarantool-patches] [PATCH v2 10/10] raft: introduce box.info.raft Vladislav Shpilevoy
2020-09-04 11:38   ` Serge Petrenko
2020-09-03 22:51 ` [Tarantool-patches] [PATCH v2 02/10] box: introduce summary RO flag Vladislav Shpilevoy
2020-09-04  8:17   ` Serge Petrenko
2020-09-03 22:51 ` [Tarantool-patches] [PATCH v2 03/10] wal: don't touch box.cfg.wal_dir more than once Vladislav Shpilevoy
2020-09-04  8:20   ` Serge Petrenko [this message]
2020-09-03 22:51 ` [Tarantool-patches] [PATCH v2 04/10] replication: track registered replica count Vladislav Shpilevoy
2020-09-04  8:24   ` Serge Petrenko
2020-09-07 22:54     ` Vladislav Shpilevoy
2020-09-07 15:45   ` Sergey Ostanevich
2020-09-07 22:54     ` Vladislav Shpilevoy
2020-09-03 22:51 ` [Tarantool-patches] [PATCH v2 05/10] raft: introduce persistent raft state Vladislav Shpilevoy
2020-09-04  8:59   ` Serge Petrenko
2020-09-03 22:51 ` [Tarantool-patches] [PATCH v2 06/10] raft: introduce box.cfg.raft_* options Vladislav Shpilevoy
2020-09-04  9:07   ` Serge Petrenko
2020-09-07 22:55     ` Vladislav Shpilevoy
2020-09-03 22:51 ` [Tarantool-patches] [PATCH v2 07/10] raft: relay status updates to followers Vladislav Shpilevoy
2020-09-03 22:51 ` [Tarantool-patches] [PATCH v2 08/10] [tosquash] raft: pass source instance_id to raft_process_msg() Vladislav Shpilevoy
2020-09-04  9:22   ` Serge Petrenko
2020-09-03 22:51 ` [Tarantool-patches] [PATCH v2 09/10] raft: introduce state machine Vladislav Shpilevoy
2020-09-04 11:36   ` Serge Petrenko
2020-09-07 22:57     ` Vladislav Shpilevoy
2020-09-09  8:04       ` Serge Petrenko
