Date: Wed, 28 Mar 2018 13:44:45 +0300
From: Vladimir Davydov
Subject: Re: [commits] [tarantool] 01/04: bloom: use malloc for bitmap allocations
Message-ID: <20180328104445.7bhnrvnn6mrnogof@esperanza>
In-Reply-To: <20180327203916.GA11829@atlas>
To: Konstantin Osipov, commits@tarantool.org, tarantool-patches@freelists.org

On Tue, Mar 27, 2018 at 11:39:16PM +0300, Konstantin Osipov wrote:
> * Vladimir Davydov [18/03/21 16:36]:
> > bloom: use malloc for bitmap allocations
>
> It's not only pointless, it's harmful, since mmap() is ~100 times
> slower than malloc().
>
> The patch is OK to push, after explaining to me why you had to add
> an extra memset().
>
> In a follow-up patch you could use slab_arena though.

I'm afraid not, because AFAICS slab_arena is single-threaded, while
bloom filters are allocated in a worker thread and freed in tx.

> > @@ -42,41 +44,33 @@ bloom_create(struct bloom *bloom, uint32_t number_of_values,
> >  	     double false_positive_rate, struct quota *quota)
> >  {
> >  	/* Optimal hash_count and bit count calculation */
> > -	bloom->hash_count = (uint32_t)
> > -		(log(false_positive_rate) / log(0.5) + 0.99);
> > -	/* Number of bits */
> > -	uint64_t m = (uint64_t)
> > -		(number_of_values * bloom->hash_count / log(2) + 0.5);
> > -	/* mmap page size */
> > -	uint64_t page_size = sysconf(_SC_PAGE_SIZE);
> > -	/* Number of bits in one page */
> > -	uint64_t b = page_size * CHAR_BIT;
> > -	/* number of pages, round up */
> > -	uint64_t p = (uint32_t)((m + b - 1) / b);
> > -	/* bit array size in bytes */
> > -	size_t mmap_size = p * page_size;
> > -	bloom->table_size = p * page_size / sizeof(struct bloom_block);
> > -	if (quota_use(quota, mmap_size) < 0) {
> > -		bloom->table = NULL;
> > +	uint16_t hash_count = ceil(log(false_positive_rate) / log(0.5));
> > +	uint64_t bit_count = ceil(number_of_values * hash_count / log(2));
> > +	uint32_t block_bits = CHAR_BIT * sizeof(struct bloom_block);
> > +	uint32_t block_count = (bit_count + block_bits - 1) / block_bits;
> > +
> > +	size_t size = block_count * sizeof(struct bloom_block);
> > +	if (quota_use(quota, size) < 0)
> >  		return -1;
> > -	}
> > -	bloom->table = (struct bloom_block *)
> > -		mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
> > -		     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> > -	if (bloom->table == MAP_FAILED) {
> > -		bloom->table = NULL;
> > -		quota_release(quota, mmap_size);
> > +
> > +	bloom->table = malloc(size);
> > +	if (bloom->table == NULL) {
> > +		quota_release(quota, size);
> >  		return -1;
> >  	}
> > +
> > +	bloom->table_size = block_count;
> > +	bloom->hash_count = hash_count;
> > +	memset(bloom->table, 0, size);
>
> Why do you need this memset()?

Because mmap() returns a zeroed-out region of memory, while malloc()
does not, so I have to zero the table manually to preserve the old
behavior.

OTOH I could use calloc() to avoid the extra memset() in case malloc()
actually falls back on mmap() and the kernel hands back already-zeroed
pages. I think I'll replace malloc() with calloc() here.
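Something like this, i.e. a sketch only, not an actual commit: it
assumes the same struct bloom fields and the quota_use()/quota_release()
API as in the hunk above, plus <math.h>, <limits.h> and <stdlib.h>, and
that bloom_create() returns 0 on success:

	int
	bloom_create(struct bloom *bloom, uint32_t number_of_values,
		     double false_positive_rate, struct quota *quota)
	{
		/* Optimal hash_count and bit count, as in the patch. */
		uint16_t hash_count = ceil(log(false_positive_rate) / log(0.5));
		uint64_t bit_count = ceil(number_of_values * hash_count / log(2));
		uint32_t block_bits = CHAR_BIT * sizeof(struct bloom_block);
		uint32_t block_count = (bit_count + block_bits - 1) / block_bits;

		size_t size = block_count * sizeof(struct bloom_block);
		if (quota_use(quota, size) < 0)
			return -1;

		/*
		 * calloc() returns zeroed memory, like mmap() did, and
		 * may skip the redundant zeroing when the allocator gets
		 * fresh, already-zeroed pages from the kernel.
		 */
		bloom->table = calloc(block_count, sizeof(struct bloom_block));
		if (bloom->table == NULL) {
			quota_release(quota, size);
			return -1;
		}

		bloom->table_size = block_count;
		bloom->hash_count = hash_count;
		return 0;
	}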
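As a quick sanity check of the new formulas (my arithmetic, not part
of the patch): with false_positive_rate = 0.01 and number_of_values =
1000000,

	hash_count = ceil(log(0.01) / log(0.5)) = ceil(6.64) = 7
	bit_count  = ceil(1000000 * 7 / log(2)) ~= 10.1e6 bits ~= 1.2 MB

which is slightly above the theoretical ~9.6 bits per element for a 1%
false positive rate, because hash_count is rounded up to a whole number
of hash functions.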