
[v2,04/10] hashmap: introduce a new hashmap_partial_clear()

Message ID 061ab45a9bdae5352f62fa6e81bb21ae5c94b8df.1602549650.git.gitgitgadget@gmail.com (mailing list archive)
State New, archived

Commit Message

Elijah Newren Oct. 13, 2020, 12:40 a.m. UTC
From: Elijah Newren <newren@gmail.com>

merge-ort is a heavy user of strmaps, which are built on hashmap.[ch].
reset_maps() in merge-ort was taking about 12% of overall runtime in my
testcase involving rebasing 35 patches of linux.git across a big rename.
reset_maps() was calling hashmap_free() followed by hashmap_init(),
meaning that not only was it freeing all the memory associated with each
of the strmaps just to immediately allocate a new array again, it was
allocating a new array that wasy likely smaller than needed (thus
resulting in later need to rehash things).  The ending size of the map
table on the previous commit was likely almost perfectly sized for the
next commit we wanted to pick, and not dropping and reallocating the
table immediately is a win.

Add some new API to hashmap to clear a hashmap of entries without
freeing map->table (and instead only zeroing it out like alloc_table()
would do, along with zeroing the count of items in the table and the
shrink_at field).

Signed-off-by: Elijah Newren <newren@gmail.com>
---
 hashmap.c | 39 +++++++++++++++++++++++++++------------
 hashmap.h | 13 ++++++++++++-
 2 files changed, 39 insertions(+), 13 deletions(-)

Comments

Jeff King Oct. 30, 2020, 1:41 p.m. UTC | #1
On Tue, Oct 13, 2020 at 12:40:44AM +0000, Elijah Newren via GitGitGadget wrote:

> merge-ort is a heavy user of strmaps, which are built on hashmap.[ch].
> reset_maps() in merge-ort was taking about 12% of overall runtime in my
> testcase involving rebasing 35 patches of linux.git across a big rename.
> reset_maps() was calling hashmap_free() followed by hashmap_init(),
> meaning that not only was it freeing all the memory associated with each
> of the strmaps just to immediately allocate a new array again, it was
> allocating a new array that wasy likely smaller than needed (thus

s/wasy/was/

> resulting in later need to rehash things).  The ending size of the map
> table on the previous commit was likely almost perfectly sized for the
> next commit we wanted to pick, and not dropping and reallocating the
> table immediately is a win.
> 
> Add some new API to hashmap to clear a hashmap of entries without
> freeing map->table (and instead only zeroing it out like alloc_table()
> would do, along with zeroing the count of items in the table and the
> shrink_at field).

This seems like a reasonable optimization to make, and doesn't make the
API significantly more complicated. I'd expect the allocation of actual
entry objects to dwarf the table allocation, but I guess:

  - you'll deal with the individual entries later using a mempool

  - it's not just the allocation, but the re-insertion of the entries as
    we grow

It would be nice if we had some actual perf numbers to report here, so
we could know exactly how much it was buying us. But I guess things are
a bit out-of-order there. You want to do this series first and then
build merge-ort on top as a user. We could introduce the basic data
structure first, then merge-ort, and then start applying optimizations
with real-world measurements. But I'm not sure it's worth the amount of
time you'd have to spend to reorganize in that way.

>  hashmap.c | 39 +++++++++++++++++++++++++++------------
>  hashmap.h | 13 ++++++++++++-

The implementation itself looks correct to me. I already mentioned my
thoughts on naming in patch 1.

-Peff
Elijah Newren Oct. 30, 2020, 4:03 p.m. UTC | #2
On Fri, Oct 30, 2020 at 6:41 AM Jeff King <peff@peff.net> wrote:
>
> On Tue, Oct 13, 2020 at 12:40:44AM +0000, Elijah Newren via GitGitGadget wrote:
>
> > merge-ort is a heavy user of strmaps, which are built on hashmap.[ch].
> > reset_maps() in merge-ort was taking about 12% of overall runtime in my
> > testcase involving rebasing 35 patches of linux.git across a big rename.
> > reset_maps() was calling hashmap_free() followed by hashmap_init(),
> > meaning that not only was it freeing all the memory associated with each
> > of the strmaps just to immediately allocate a new array again, it was
> > allocating a new array that wasy likely smaller than needed (thus
>
> s/wasy/was/

Thanks; will fix.

> > resulting in later need to rehash things).  The ending size of the map
> > table on the previous commit was likely almost perfectly sized for the
> > next commit we wanted to pick, and not dropping and reallocating the
> > table immediately is a win.
> >
> > Add some new API to hashmap to clear a hashmap of entries without
> > freeing map->table (and instead only zeroing it out like alloc_table()
> > would do, along with zeroing the count of items in the table and the
> > shrink_at field).
>
> This seems like a reasonable optimization to make, and doesn't make the
> API significantly more complicated. I'd expect the allocation of actual
> entry objects to dwarf the table allocation, but I guess:
>
>   - you'll deal with the individual entries later using a mempool
>
>   - it's not just the allocation, but the re-insertion of the entries as
>     we grow
>
> It would be nice if we had some actual perf numbers to report here, so
> we could know exactly how much it was buying us. But I guess things are
> a bit out-of-order there. You want to do this series first and then
> build merge-ort on top as a user. We could introduce the basic data
> structure first, then merge-ort, and then start applying optimizations
> with real-world measurements. But I'm not sure it's worth the amount of
> time you'd have to spend to reorganize in that way.

Yeah, the perf benefits didn't really come until I added a
strmap_clear() based on this, so as you discovered I put perf numbers
in patch 7 of this series.  Should I add a mention of the later commit
message at this point in the series?

> >  hashmap.c | 39 +++++++++++++++++++++++++++------------
> >  hashmap.h | 13 ++++++++++++-
>
> The implementation itself looks correct to me. I already mentioned my
> thoughts on naming in patch 1.

I'll circle back to that when I comment on patch 1...
Jeff King Nov. 3, 2020, 4:10 p.m. UTC | #3
On Fri, Oct 30, 2020 at 09:03:38AM -0700, Elijah Newren wrote:

> > It would be nice if we had some actual perf numbers to report here, so
> > we could know exactly how much it was buying us. But I guess things are
> > a bit out-of-order there. You want to do this series first and then
> > build merge-ort on top as a user. We could introduce the basic data
> > structure first, then merge-ort, and then start applying optimizations
> > with real-world measurements. But I'm not sure it's worth the amount of
> > time you'd have to spend to reorganize in that way.
> 
> Yeah, the perf benefits didn't really come until I added a
> strmap_clear() based on this, so as you discovered I put perf numbers
> in patch 7 of this series.  Should I add a mention of the later commit
> message at this point in the series?

Nah, I think it's OK as it is. That kind of thing matters more for
reviewing than when you find the commit later on. And we're already
discussing it during the review.

-Peff

Patch

diff --git a/hashmap.c b/hashmap.c
index bb7c9979b8..922ed07954 100644
--- a/hashmap.c
+++ b/hashmap.c
@@ -174,22 +174,37 @@  void hashmap_init(struct hashmap *map, hashmap_cmp_fn equals_function,
 	map->do_count_items = 1;
 }
 
+static void free_individual_entries(struct hashmap *map, ssize_t entry_offset)
+{
+	struct hashmap_iter iter;
+	struct hashmap_entry *e;
+
+	hashmap_iter_init(map, &iter);
+	while ((e = hashmap_iter_next(&iter)))
+		/*
+		 * like container_of, but using caller-calculated
+		 * offset (caller being hashmap_free_entries)
+		 */
+		free((char *)e - entry_offset);
+}
+
+void hashmap_partial_clear_(struct hashmap *map, ssize_t entry_offset)
+{
+	if (!map || !map->table)
+		return;
+	if (entry_offset >= 0)  /* called by hashmap_clear_entries */
+		free_individual_entries(map, entry_offset);
+	memset(map->table, 0, map->tablesize * sizeof(struct hashmap_entry *));
+	map->shrink_at = 0;
+	map->private_size = 0;
+}
+
 void hashmap_free_(struct hashmap *map, ssize_t entry_offset)
 {
 	if (!map || !map->table)
 		return;
-	if (entry_offset >= 0) { /* called by hashmap_free_entries */
-		struct hashmap_iter iter;
-		struct hashmap_entry *e;
-
-		hashmap_iter_init(map, &iter);
-		while ((e = hashmap_iter_next(&iter)))
-			/*
-			 * like container_of, but using caller-calculated
-			 * offset (caller being hashmap_free_entries)
-			 */
-			free((char *)e - entry_offset);
-	}
+	if (entry_offset >= 0)  /* called by hashmap_free_entries */
+		free_individual_entries(map, entry_offset);
 	free(map->table);
 	memset(map, 0, sizeof(*map));
 }
diff --git a/hashmap.h b/hashmap.h
index 2994dc7a9c..056a8cda32 100644
--- a/hashmap.h
+++ b/hashmap.h
@@ -232,7 +232,8 @@  void hashmap_init(struct hashmap *map,
 			 const void *equals_function_data,
 			 size_t initial_size);
 
-/* internal function for freeing hashmap */
+/* internal functions for clearing or freeing hashmap */
+void hashmap_partial_clear_(struct hashmap *map, ssize_t offset);
 void hashmap_free_(struct hashmap *map, ssize_t offset);
 
 /*
@@ -265,6 +266,16 @@  void hashmap_free_(struct hashmap *map, ssize_t offset);
  */
 #define hashmap_free(map) hashmap_free_(map, -1)
 
+/*
+ * Basically the same as calling hashmap_free() followed by hashmap_init(),
+ * but doesn't incur the overhead of deallocating and reallocating
+ * map->table; it leaves map->table allocated and the same size but zeroes
+ * it out so it's ready for use again as an empty map.  As with
+ * hashmap_free(), you may need to free the entries yourself before calling
+ * this function.
+ */
+#define hashmap_partial_clear(map) hashmap_partial_clear_(map, -1)
+
 /*
  * Frees @map and all entries.  @type is the struct type of the entry
  * where @member is the hashmap_entry struct used to associate with @map.