From patchwork Mon Aug 5 10:01:09 2024
X-Patchwork-Submitter: JaeJoon Jung
X-Patchwork-Id: 13753385
From: JaeJoon Jung <rgbi3307@gmail.com>
To: Linus Torvalds, Sasha Levin, "Liam R. Howlett", Matthew Wilcox,
	Greg Kroah-Hartman
Cc: JaeJoon Jung, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	maple-tree@lists.infradead.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v2 1/2] lib/htree: Add locking interface to new Hash Tree
Date: Mon, 5 Aug 2024 19:01:09 +0900
Message-Id: <20240805100109.14367-1-rgbi3307@gmail.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Implementation of new Hash Tree [PATCH v2]
------------------------------------------

Add spinlock.h and rcupdate.h in include/linux/htree.h.
Add the htree_root structure as the locking interface:
htree_root.ht_lock is a spinlock_t used to run spin_lock, and
htree_root.ht_first is an __rcu pointer accessed through the RCU API.
The kernel standard locking API is accessed through macros.

full source:
------------
https://github.com/kernel-bz/htree.git

Manual(PDF):
------------
https://github.com/kernel-bz/htree/blob/main/docs/htree-20240802.pdf

Signed-off-by: JaeJoon Jung <rgbi3307@gmail.com>
---
 include/linux/htree.h | 117 ++++++++++++++++++++++++++-
 lib/htree-test.c      | 182 ++++++++++++++++++++++--------------------
 lib/htree.c           |  29 ++++++-
 3 files changed, 238 insertions(+), 90 deletions(-)

diff --git a/include/linux/htree.h b/include/linux/htree.h
index c7b10c5b9bf2..c5bc2858e7fd 100644
--- a/include/linux/htree.h
+++ b/include/linux/htree.h
@@ -15,6 +15,8 @@
 #include
 #include
 #include
+#include <linux/spinlock.h>
+#include <linux/rcupdate.h>
 
 /* size of one hash tree struct: [16]Bytes
@@ -112,6 +114,17 @@ enum ht_flags {	/* htf: htree working flags (keep order) */
 	htf_freed,
 };
 
+struct htree_root {			/* root: hash tree root */
+	spinlock_t ht_lock;		/* lock while update */
+	struct hash_tree __rcu *ht_first;	/* start of the hash tree */
+};
+
+#define DEFINE_HTREE_ROOT(name)					\
+	struct htree_root name = {				\
+		.ht_lock = __SPIN_LOCK_UNLOCKED(name.ht_lock),	\
+		.ht_first = NULL,				\
+	}
+
 #define HTREE_BITS_START	8	/* start of hash bits(default) */
 #define HTREE_BITS_END		3	/* end of hash bits */
 #define HTREE_BITS_SHIFT	3	/* shift of hash bits */
@@ -235,7 +248,7 @@ struct htree_data *ht_insert(struct htree_state *hts, struct hash_tree *htree,
 struct htree_data *ht_erase(struct htree_state *hts,
 			struct hash_tree *htree, u64 index);
 
-enum ht_flags ht_destroy(struct htree_state *hts, struct hash_tree *htree);
+enum ht_flags ht_destroy_lock(struct htree_state *hts, struct htree_root *root);
 
 void ht_statis(struct htree_state *hts, struct hash_tree *htree,
 		s32 *acnt, u64 *dcnt);
@@ -243,5 +256,107 @@ void ht_statis(struct htree_state *hts, struct hash_tree *htree,
 
 struct htree_data *ht_most_index(struct htree_state *hts, struct hash_tree *htree);
 
+/* spin_lock API */
+#define ht_trylock(xa)		spin_trylock(&(xa)->ht_lock)
+#define ht_lock(xa)		spin_lock(&(xa)->ht_lock)
+#define ht_unlock(xa)		spin_unlock(&(xa)->ht_lock)
+#define ht_lock_bh(xa)		spin_lock_bh(&(xa)->ht_lock)
+#define ht_unlock_bh(xa)	spin_unlock_bh(&(xa)->ht_lock)
+#define ht_lock_irq(xa)		spin_lock_irq(&(xa)->ht_lock)
+#define ht_unlock_irq(xa)	spin_unlock_irq(&(xa)->ht_lock)
+#define ht_lock_irqsave(xa, flags) \
+				spin_lock_irqsave(&(xa)->ht_lock, flags)
+#define ht_unlock_irqrestore(xa, flags) \
+				spin_unlock_irqrestore(&(xa)->ht_lock, flags)
+#define ht_lock_nested(xa, subclass) \
+				spin_lock_nested(&(xa)->ht_lock, subclass)
+#define ht_lock_bh_nested(xa, subclass) \
+				spin_lock_bh_nested(&(xa)->ht_lock, subclass)
+#define ht_lock_irq_nested(xa, subclass) \
+				spin_lock_irq_nested(&(xa)->ht_lock, subclass)
+#define ht_lock_irqsave_nested(xa, flags, subclass) \
+				spin_lock_irqsave_nested(&(xa)->ht_lock, flags, subclass)
+
+
+static inline void htree_root_alloc(struct htree_state *hts,
+				    struct htree_root *root)
+{
+	rcu_assign_pointer(root->ht_first, ht_table_alloc(hts));
+}
+
+static inline struct hash_tree *htree_first_rcu(const struct htree_root *root)
+{
+	return rcu_dereference_check(root->ht_first,
+				     lockdep_is_held(&root->ht_lock));
+}
+
+static inline struct hash_tree *htree_first_rcu_locked(const struct htree_root *root)
+{
+	return rcu_dereference_protected(root->ht_first,
+					 lockdep_is_held(&root->ht_lock));
+}
+
+
+static inline __must_check struct htree_data *ht_insert_lock(
+		struct htree_state *hts, struct htree_root *root,
+		struct htree_data *hdata, enum ht_flags req)
+{
+	ht_lock(root);
+	hdata = ht_insert(hts, htree_first_rcu_locked(root), hdata, req);
+	ht_unlock(root);
+	return hdata;
+}
+
+static inline __must_check struct htree_data *ht_insert_lock_irq(
+		struct htree_state *hts, struct htree_root *root,
+		struct htree_data *hdata, enum ht_flags req)
+{
+	ht_lock_irq(root);
+	hdata = ht_insert(hts, htree_first_rcu_locked(root), hdata, req);
+	ht_unlock_irq(root);
+	return hdata;
+}
+
+static inline __must_check struct htree_data *ht_insert_lock_irqsave(
+		struct htree_state *hts, struct htree_root *root,
+		struct htree_data *hdata, enum ht_flags req)
+{
+	unsigned long flags;
+	ht_lock_irqsave(root, flags);
+	hdata = ht_insert(hts, htree_first_rcu_locked(root), hdata, req);
+	ht_unlock_irqrestore(root, flags);
+	return hdata;
+}
+
+static inline __must_check struct htree_data *ht_erase_lock(
+		struct htree_state *hts, struct htree_root *root, u64 index)
+{
+	struct htree_data *hdata;
+	ht_lock(root);
+	hdata = ht_erase(hts, htree_first_rcu_locked(root), index);
+	ht_unlock(root);
+	return hdata;
+}
+
+static inline __must_check struct htree_data *ht_erase_lock_irq(
+		struct htree_state *hts, struct htree_root *root, u64 index)
+{
+	struct htree_data *hdata;
+	ht_lock_irq(root);
+	hdata = ht_erase(hts, htree_first_rcu_locked(root), index);
+	ht_unlock_irq(root);
+	return hdata;
+}
+
+static inline __must_check struct htree_data *ht_erase_lock_irqsave(
+		struct htree_state *hts, struct htree_root *root, u64 index)
+{
+	unsigned long flags;
+	struct htree_data *hdata;
+	ht_lock_irqsave(root, flags);
+	hdata = ht_erase(hts, htree_first_rcu_locked(root), index);
+	ht_unlock_irqrestore(root, flags);
+	return hdata;
+}
 #endif /* _LINUX_HTREE_H */
diff --git a/lib/htree-test.c b/lib/htree-test.c
index 05b60da271de..5bf862706ce2 100644
--- a/lib/htree-test.c
+++ b/lib/htree-test.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * htree/htree-api.c
+ * htree/htree-test.c
  * Hash-Trees test codes to verify
  *
  * Copyright(C) 2024, JaeJoon Jung
@@ -17,28 +17,30 @@
 
   Hash Tree API flow
   ------------------
-  *hts = ht_hts_alloc()			//alloc hts
-  ht_hts_clear_init(hts, ...)
+  DEFINE_HTREE_ROOT(ht_root);		//define htree_root
 
-  *htree = ht_table_alloc(hts)		//alloc first(depth:0) htree
+  *hts = ht_hts_alloc();		//alloc hts
+  ht_hts_clear_init(hts, ...);
+
+  htree_root_alloc(hts, &ht_root);	//alloc first hash tree
 
   run_loop() {
-	*udata = _data_alloc(index)	//alloc udata
+	*udata = _data_alloc(index);	//alloc udata
 
-	ht_insert(hts, htree, udata->hdata, ..)
-	ht_erase(hts, htree, index)
-	hdata = ht_find(hts, htree, index)
-	hdata = ht_most_index(hts, htree)
+	ht_insert_lock(hts, &ht_root, udata->hdata, ..);
+	ht_erase_lock(hts, &ht_root, index);
+	hdata = ht_find(hts, ht_root.ht_first, index);
+	hdata = ht_most_index(hts, ht_root.ht_first);
 
-	ht_statis(hts, htree, ...)
+	ht_statis(hts, ht_root.ht_first, ...);
   }
 
-  htree_erase_all(hts, htree)		//remove all udata
+  htree_erase_all_lock(hts, &ht_root)	//remove all udata
 
-  ht_destroy(hts, htree)		//remove all htree
+  ht_destroy_lock(hts, &ht_root)	//remove all htree
 
-  kfree(hts)				//remove hts
+  kfree(hts)				//remove hts
 
 */
@@ -75,6 +77,8 @@
 
 #define HTREE_TEST_SCHED_CNT	200
 
+DEFINE_HTREE_ROOT(ht_root);
+
 struct data_struct {
 	/* user defined data members ... */
 	char a;
@@ -361,19 +365,19 @@ static void __htree_debug_walks_all(struct htree_state *hts,
 
 /**
  * htree_walks_all_debug - display to debug all indexes
  * @hts: htree_state pointer
- * @htree: hash_tree root pointer
+ * @root: hash_tree root pointer
  * @index: index to find
  *
  * this function cycles through all hash tables and outputs all indexes.
  */
 static void htree_debug_walks_all(struct htree_state *hts,
-		struct hash_tree *htree, u64 index)
+		struct htree_root *root, u64 index)
 {
 	pr_ht_debug("[@@@@) walking: sbit:%u, dmax:%u, acnt:%d, dcnt:%llu\n\n",
 		    hts->sbit, hts->dmax, hts->acnt, hts->dcnt);
 
 	hts->dept = 0;
-	__htree_debug_walks_all(hts, htree, index);
+	__htree_debug_walks_all(hts, htree_first_rcu(root), index);
 
 	pr_ht_debug("(@@@@] done: sbit:%u, dmax:%u, acnt:%d, dcnt:%llu\n\n",
 		    hts->sbit, hts->dmax, hts->acnt, hts->dcnt);
@@ -381,14 +385,14 @@ static void htree_debug_walks_all(struct htree_state *hts,
 #endif /* HTREE_DEBUG_DETAIL */
 
 /**
- * __htree_erase_all - erase udata all
+ * __htree_erase_all_lock - erase udata all
  * @hts: htree_state pointer
  * @htree: hash_tree root pointer
  * @erased: erased udata count
  *
  * this function cycles through all hash tables and erase udata all
  */
-static void __htree_erase_all(struct htree_state *hts,
+static void __htree_erase_all_lock(struct htree_state *hts,
 		struct hash_tree *htree, u64 *erased)
 {
 	u8 bits, ncnt;
@@ -421,7 +425,7 @@ static void __htree_erase_all_lock(struct htree_state *hts,
 			hts->dept++;
 			pnum = anum;
 			/* recursive call */
-			__htree_erase_all(hts, _next, erased);
+			__htree_erase_all_lock(hts, _next, erased);
 			anum = pnum;
 			hts->dept--;
 		} else {
@@ -431,13 +435,13 @@ static void __htree_erase_all_lock(struct htree_state *hts,
 }
 
 /**
- * htree_erase_all - erase udata all
+ * htree_erase_all_lock - erase udata all
  * @hts: htree_state pointer
- * @htree: hash_tree root pointer
+ * @root: hash_tree root pointer
  *
  * return: erased all udata count
  */
-static u64 htree_erase_all(struct htree_state *hts, struct hash_tree *htree)
+static u64 htree_erase_all_lock(struct htree_state *hts, struct htree_root *root)
 {
 	u64 erased = 0;
 
@@ -445,7 +449,10 @@ static u64 htree_erase_all_lock(struct htree_state *hts, struct htree_root *root)
 		   hts->sbit, hts->dmax, hts->acnt, hts->dcnt);
 
 	hts->dept = 0;
-	__htree_erase_all(hts, htree, &erased);
+
+	ht_lock(root);
+	__htree_erase_all_lock(hts, htree_first_rcu_locked(root), &erased);
+	ht_unlock(root);
 
 	pr_ht_info("(~~~~] done: sbit:%u, acnt:%d, dcnt:%llu, erased:%llu\n\n",
 		   hts->sbit, hts->acnt, hts->dcnt, erased);
@@ -456,7 +463,7 @@ static u64 htree_erase_all_lock(struct htree_state *hts, struct htree_root *root)
 /**
  * _htree_insert_range - insert udata to hash tree using ht_insert()
  * @hts: htree_state pointer
- * @htree: hash_tree root pointer
+ * @root: hash_tree root pointer
  * @start: start index to insert
  * @end: end index to insert
  * @gap: gap between indices
@@ -466,7 +473,7 @@ static u64 htree_erase_all_lock(struct htree_state *hts, struct htree_root *root)
  * if req is htf_ins, the new udata is inserted next to each other.
  * if req is htf_erase, the new udata is inserted, and old udata is erased.
  */
-static u64 _htree_insert_range(struct htree_state *hts, struct hash_tree *htree,
+static u64 _htree_insert_range(struct htree_state *hts, struct htree_root *root,
 		u64 start, u64 end, u64 gap, enum ht_flags req)
 {
 	u64 index;
@@ -478,7 +485,7 @@ static u64 _htree_insert_range(struct htree_state *hts, struct htree_root *root,
 		   start, end, gap);
 	for (index = start; index <= end; index += gap) {
 		udata = _htree_data_alloc(index);
-		rdata = ht_insert(hts, htree, &udata->hdata, req);
+		rdata = ht_insert_lock(hts, root, &udata->hdata, req);
 		if (req == htf_erase && rdata) {
 			udata = hlist_entry_safe(rdata, struct data_struct, hdata);
 			if (udata && rdata->index == index) {
@@ -500,12 +507,12 @@ static u64 _htree_insert_range(struct htree_state *hts, struct htree_root *root,
 /**
  * _htree_find_range - find udata in the hash tree using ht_find()
  * @hts: htree_state pointer
- * @htree: hash_tree root pointer
+ * @root: hash_tree root pointer
  * @start: start index to find
  * @end: end index to find
  * @gap: gap between indices
  */
-static u64 _htree_find_range(struct htree_state *hts, struct hash_tree *htree,
+static u64 _htree_find_range(struct htree_state *hts, struct htree_root *root,
 		u64 start, u64 end, u64 gap)
 {
 	u64 index;
@@ -516,7 +523,7 @@ static u64 _htree_find_range(struct htree_state *hts, struct htree_root *root,
 	pr_ht_info("[****) finding: [s:%llu ... e:%llu] (g:%llu)\n",
 		   start, end, gap);
 	for (index = start; index <= end; index += gap) {
-		rdata = ht_find(hts, htree, index);
+		rdata = ht_find(hts, htree_first_rcu(root), index);
 		if (rdata) {
 			udata = hlist_entry_safe(rdata, struct data_struct, hdata);
 			if (udata && rdata->index == index) {
@@ -525,6 +532,7 @@ static u64 _htree_find_range(struct htree_state *hts, struct htree_root *root,
 				found++;
 			}
 		}
+
 		loop++;
 		if (!(loop % HTREE_TEST_SCHED_CNT))
 			schedule();
@@ -537,23 +545,25 @@ static u64 _htree_find_range(struct htree_state *hts, struct htree_root *root,
 /**
  * _htree_erase_range - erase udata from hash tree using ht_erase()
  * @hts: htree_state pointer
- * @htree: hash_tree root pointer
+ * @root: hash_tree root pointer
  * @start: start index to erase
  * @end: end index to erase
  * @gap: gap between indices
  */
-static u64 _htree_erase_range(struct htree_state *hts, struct hash_tree *htree,
+static u64 _htree_erase_range(struct htree_state *hts, struct htree_root *root,
 		u64 start, u64 end, u64 gap)
 {
 	u64 index;
 	u64 loop = 0, erased = 0;
+	struct hash_tree *htree;
 	struct data_struct *udata;
 	struct htree_data *rdata;
 
 	pr_ht_info("[----) erasing: [s:%llu ... e:%llu] (g:%llu)\n",
 		   start, end, gap);
 	for (index = start; index <= end; index += gap) {
-		rdata = ht_erase(hts, htree, index);
+		htree = htree_first_rcu(root);
+		rdata = ht_erase_lock(hts, root, index);
 		if (rdata) {
 			udata = hlist_entry_safe(rdata, struct data_struct, hdata);
 			if (udata && rdata->index == index) {
@@ -580,22 +590,24 @@ static u64 _htree_erase_range(struct htree_state *hts, struct htree_root *root,
 /**
  * _htree_update_range - update udata in the hash tree using ft_find()
  * @hts: htree_state pointer
- * @htree: hash_tree root pointer
+ * @root: hash_tree root pointer
  * @start: start index to update
  * @end: end index to update
  * @gap: gap between indices
  */
-static u64 _htree_update_range(struct htree_state *hts, struct hash_tree *htree,
+static u64 _htree_update_range(struct htree_state *hts, struct htree_root *root,
 		u64 start, u64 end, u64 gap)
 {
 	u64 index;
 	u64 loop = 0, updated = 0;
+	struct hash_tree *htree;
 	struct data_struct *udata;
 	struct htree_data *rdata;
 
 	pr_ht_info("[####) updating: [s:%llu ... e:%llu] (g:%llu)\n",
 		   start, end, gap);
 	for (index = start; index <= end; index += gap) {
+		htree = htree_first_rcu(root);
 		rdata = ht_find(hts, htree, index);
 		if (rdata) {
 			udata = hlist_entry_safe(rdata, struct data_struct, hdata);
@@ -630,14 +642,14 @@ static u64 _htree_update_range(struct htree_state *hts, struct htree_root *root,
 /**
  * _htree_statis - calculate hash tree statistics and get into hts.
  * @hts: htree_state pointer to store statistics
- * @htree: hash_tree root pointer
+ * @root: hash_tree root pointer
  */
-static void _htree_statis(struct htree_state *hts, struct hash_tree *htree)
+static void _htree_statis(struct htree_state *hts, struct htree_root *root)
 {
 	s32 acnt = 0;
 	u64 dcnt = 0;
 
-	ht_statis(hts, htree, &acnt, &dcnt);
+	ht_statis(hts, htree_first_rcu(root), &acnt, &dcnt);
 
 	if (hts->dcnt == dcnt && hts->acnt == acnt) {
 		pr_ht_info("[ OK ] statist: acnt:%d, dcnt:%llu ", acnt, dcnt);
@@ -651,8 +663,10 @@ static void _htree_statis(struct htree_state *hts, struct htree_root *root)
 
 /**
  * _htree_statis_info - shows information calculated by htree_statis().
+ * @hts: htree_state pointer to read statistics
+ * @root: hash_tree root pointer
  */
-static void _htree_statis_info(struct htree_state *hts, struct hash_tree *htree)
+static void _htree_statis_info(struct htree_state *hts, struct htree_root *root)
 {
 	u32 sizh = sizeof(struct hash_tree);
 	u32 sizd = sizeof(struct data_struct);
@@ -663,7 +677,7 @@ static void _htree_statis_info(struct htree_state *hts, struct htree_root *root)
 	u64 smem = hsum + dsum;
 
 	if (hts->asum == 0)
-		_htree_statis(hts, htree);
+		_htree_statis(hts, root);
 
 	pr_ht_stat("------------------------------------------\n");
 	pr_ht_stat(" hash start bits(sbit) : %10d\n", hts->sbit);
@@ -692,10 +706,11 @@ static void _htree_statis_info(struct htree_state *hts, struct htree_root *root)
  * if sort flag is HTREE_FLAG_ASCD, root hash table has the smallest index.
  * if sort flag is HTREE_FLAG_DECD, root hash table has the largest index.
  */
-static void _htree_get_most_index(struct htree_state *hts, struct hash_tree *htree)
+static void _htree_get_most_index(struct htree_state *hts, struct htree_root *root)
 {
 	struct htree_data *hdata;
-	hdata = ht_most_index(hts, htree);
+
+	hdata = ht_most_index(hts, htree_first_rcu(root));
 	if (hdata) {
 		if (hts->sort == HTREE_FLAG_ASCD) {
 			pr_ht_stat("[MOST] smallest index:%llu\n\n", hdata->index);
@@ -708,20 +723,20 @@ static void _htree_get_most_index(struct htree_state *hts, struct htree_root *root)
 /**
  * _htree_remove_all - remove all udata and hash trees
  *
- * before run ht_destroy(), the udata must be erased all.
- * ht_destroy() removes all hash trees, but it does not remove the udata.
+ * before run ht_destroy_lock(), the udata must be erased all.
+ * ht_destroy_lock() removes all hash trees, but it does not remove the udata.
  */
-static void _htree_remove_all(struct htree_state *hts, struct hash_tree *htree)
+static void _htree_remove_all(struct htree_state *hts, struct htree_root *root)
 {
 	/* remove all udata */
-	hts->dcnt -= htree_erase_all(hts, htree);
+	hts->dcnt -= htree_erase_all_lock(hts, root);
 	if (hts->dcnt != 0) {
 		pr_ht_warn("[WARN] erase remained acnt:%d, dcnt:%llu\n\n",
 			   hts->acnt, hts->dcnt);
 	}
 
 	/* remove all hash trees */
-	if (ht_destroy(hts, htree) == htf_ok) {
+	if (ht_destroy_lock(hts, root) == htf_ok) {
 		pr_ht_stat("[ OK ] destroy remained acnt:%d, dcnt:%llu\n\n",
 			   hts->acnt, hts->dcnt);
 	} else {
@@ -743,7 +758,6 @@ static void _htree_remove_all(struct htree_state *hts, struct htree_root *root)
  */
 static u64 _htree_test_index_loop(struct htree_state *hts, u64 start, u64 end)
 {
-	struct hash_tree *htree;
 	u64 inserted, found, erased, updated;
 	u64 dcnt, slice;
 
@@ -752,42 +766,42 @@ static u64 _htree_test_index_loop(struct htree_state *hts, u64 start, u64 end)
 	slice = (end - start) / 10 + 2;
 
 	/* first root hash tree alloc */
-	htree = ht_table_alloc(hts);
+	htree_root_alloc(hts, &ht_root);
 
-	inserted = _htree_insert_range(hts, htree, start, end, 1, htf_ins);
+	inserted = _htree_insert_range(hts, &ht_root, start, end, 1, htf_ins);
 	if (inserted != hts->dcnt) {
 		pr_ht_err("[FAIL] inserted:%llu, dcnt:%llu, diff:%lld\n\n",
 			  inserted, hts->dcnt, inserted - hts->dcnt);
 	}
-	_htree_statis(hts, htree);
+	_htree_statis(hts, &ht_root);
 
-	erased = _htree_erase_range(hts, htree, start, end, slice);
-	found = _htree_find_range(hts, htree, start, end, slice);
+	erased = _htree_erase_range(hts, &ht_root, start, end, slice);
+	found = _htree_find_range(hts, &ht_root, start, end, slice);
 	if (found) {
 		pr_ht_err("[FAIL] erased:%llu, found:%llu, diff:%lld\n\n",
 			  erased, found, erased - found);
 	}
-	_htree_statis(hts, htree);
+	_htree_statis(hts, &ht_root);
 
-	inserted = _htree_insert_range(hts, htree, start, end, slice, htf_ins);
-	updated = _htree_update_range(hts, htree, start, end, slice);
+	inserted = _htree_insert_range(hts, &ht_root, start, end, slice, htf_ins);
+	updated = _htree_update_range(hts, &ht_root, start, end, slice);
 	if (inserted != updated) {
 		pr_ht_err("[FAIL] inserted:%llu, updated:%llu, diff:%lld\n\n",
 			  inserted, updated, inserted - updated);
 	}
-	_htree_statis(hts, htree);
-	_htree_get_most_index(hts, htree);
+	_htree_statis(hts, &ht_root);
+	_htree_get_most_index(hts, &ht_root);
 
 #ifdef HTREE_DEBUG_DETAIL
-	htree_debug_walks_all(hts, htree, 0);
+	htree_debug_walks_all(hts, &ht_root, 0);
 #endif
 
-	_htree_statis_info(hts, htree);
+	_htree_statis_info(hts, &ht_root);
 
 	dcnt = hts->dcnt;
-	_htree_remove_all(hts, htree);
+	_htree_remove_all(hts, &ht_root);
 
 	return dcnt;
 }
@@ -872,7 +886,6 @@ index type:<%s>, sorting type:<%s>\n", idxts[idx_type], sorts[sort_type]);
 static void _htree_test_idx_random(u8 idx_type, u8 sort_type, u64 maxnr)
 {
 	u64 i, index;
-	struct hash_tree *htree;
 	struct data_struct *udata;
 	struct htree_data *rdata;
 	u64 loop = 0, inserted = 0, erased = 0;
@@ -886,13 +899,13 @@ static void _htree_test_idx_random(u8 idx_type, u8 sort_type, u64 maxnr)
 	ht_hts_clear_init(hts, maxnr, idx_type, sort_type);
 
 	/* first root hash tree alloc */
-	htree = ht_table_alloc(hts);
+	htree_root_alloc(hts, &ht_root);
 
 	pr_ht_stat("[START) RANDOM: sbit:%u, index type:<%s>, sorting type:<%s>\n\n",
 		   hts->sbit, idxts[idx_type], sorts[sort_type]);
 
 	udata = _htree_data_alloc(check_idx);
-	rdata = ht_insert(hts, htree, &udata->hdata, htf_ins);
+	rdata = ht_insert_lock(hts, &ht_root, &udata->hdata, htf_ins);
 	inserted++;
 	loop++;
 
@@ -902,7 +915,7 @@ static void _htree_test_idx_random(u8 idx_type, u8 sort_type, u64 maxnr)
 			get_random_u32() : get_random_u64();
 
 		udata = _htree_data_alloc(index);
-		rdata = ht_insert(hts, htree, &udata->hdata, htf_ins);
+		rdata = ht_insert_lock(hts, &ht_root, &udata->hdata, htf_ins);
 		if (!rdata)
 			inserted++;
 		loop++;
@@ -910,9 +923,9 @@ static void _htree_test_idx_random(u8 idx_type, u8 sort_type, u64 maxnr)
 			schedule();
 	}
 
-	_htree_statis(hts, htree);
+	_htree_statis(hts, &ht_root);
 
-	rdata = ht_find(hts, htree, check_idx);
+	rdata = ht_find(hts, htree_first_rcu(&ht_root), check_idx);
 	if (!rdata) {
 		pr_ht_err("[FAIL] NOT found check index:%llu\n\n", check_idx);
 	}
@@ -923,7 +936,7 @@ static void _htree_test_idx_random(u8 idx_type, u8 sort_type, u64 maxnr)
 		index = (idx_type == HTREE_FLAG_IDX32) ?
 			get_random_u32() : get_random_u64();
 
-		rdata = ht_erase(hts, htree, index);
+		rdata = ht_erase_lock(hts, &ht_root, index);
 		if (rdata) {
 			udata = hlist_entry_safe(rdata, struct data_struct, hdata);
 			if (udata && rdata->index == index) {
@@ -938,9 +951,9 @@ static void _htree_test_idx_random(u8 idx_type, u8 sort_type, u64 maxnr)
 			schedule();
 	}
 
-	_htree_statis(hts, htree);
+	_htree_statis(hts, &ht_root);
 
-	rdata = ht_find(hts, htree, check_idx);
+	rdata = ht_find(hts, htree_first_rcu(&ht_root), check_idx);
 	if (!rdata) {
 		pr_ht_info("[INFO] check index:%llu (erased)\n\n", check_idx);
 	}
@@ -949,13 +962,13 @@ static void _htree_test_idx_random(u8 idx_type, u8 sort_type, u64 maxnr)
 		   loop, inserted, erased);
 
 #ifdef HTREE_DEBUG_DETAIL
-	htree_debug_walks_all(hts, htree, 0);
+	htree_debug_walks_all(hts, &ht_root, 0);
#endif
 
-	_htree_get_most_index(hts, htree);
-	_htree_statis_info(hts, htree);
+	_htree_get_most_index(hts, &ht_root);
+	_htree_statis_info(hts, &ht_root);
 
-	_htree_remove_all(hts, htree);
+	_htree_remove_all(hts, &ht_root);
 
 	kfree(hts);
 }
@@ -975,7 +988,6 @@ static void _htree_test_idx_random(u8 idx_type, u8 sort_type, u64 maxnr)
  */
 static void _htree_test_index_same(u8 idx_type, u8 sort_type, u64 maxnr)
 {
-	struct hash_tree *htree;
 	u64 inserted, found;
 	const char *idxts[] = { "64bits", "32bits" };
 	const char *sorts[] = { "ASCD", "DECD" };
@@ -987,49 +999,49 @@ static void _htree_test_index_same(u8 idx_type, u8 sort_type, u64 maxnr)
 	ht_hts_clear_init(hts, maxnr, idx_type, sort_type);
 
 	/* first root hash tree alloc */
-	htree = ht_table_alloc(hts);
+	htree_root_alloc(hts, &ht_root);
 
 	pr_ht_stat("[START) SAME: sbit:%u, index type:<%s>, sorting type:<%s>\n\n",
 		   hts->sbit, idxts[idx_type], sorts[sort_type]);
 
 	pr_ht_stat("[loop) %llu: new index inserting(htf_ins)\n\n", maxnr);
-	inserted = _htree_insert_range(hts, htree, 0, maxnr, gap - 1, htf_ins);
+	inserted = _htree_insert_range(hts, &ht_root, 0, maxnr, gap - 1, htf_ins);
 	if (inserted != hts->dcnt) {
 		pr_ht_err("[FAIL] inserted:%llu, dcnt:%llu, diff:%lld\n\n",
 			  inserted, hts->dcnt, inserted - hts->dcnt);
 	}
-	_htree_statis(hts, htree);
+	_htree_statis(hts, &ht_root);
 
 	pr_ht_stat("[loop) %llu: SAME index inserting(htf_erase)\n\n", maxnr);
-	inserted = _htree_insert_range(hts, htree, 1, maxnr, gap, htf_erase);
+	inserted = _htree_insert_range(hts, &ht_root, 1, maxnr, gap, htf_erase);
 	if (inserted != 0) {
 		pr_ht_err("[FAIL] inserted:%llu, dcnt:%llu, diff:%lld\n\n",
 			  inserted, hts->dcnt, inserted - hts->dcnt);
 	}
 
 	pr_ht_stat("[loop) %llu: SAME index inserting(htf_ins)\n\n", maxnr);
-	inserted = _htree_insert_range(hts, htree, 1, maxnr, gap, htf_ins);
+	inserted = _htree_insert_range(hts, &ht_root, 1, maxnr, gap, htf_ins);
 	if (inserted != (maxnr / gap)) {
 		pr_ht_err("[FAIL] inserted:%llu, dcnt:%llu, diff:%lld\n\n",
 			  inserted, hts->dcnt, inserted - hts->dcnt);
 	}
 
-	found = _htree_find_range(hts, htree, 0, maxnr, gap - 1);
+	found = _htree_find_range(hts, &ht_root, 0, maxnr, gap - 1);
 	if (found != (hts->dcnt - inserted)) {
 		pr_ht_err("[FAIL] dcnt:%llu, inserted:%llu, found:%llu\n\n",
 			  hts->dcnt, inserted, found);
 	}
-	_htree_statis(hts, htree);
+	_htree_statis(hts, &ht_root);
 
 #ifdef HTREE_DEBUG_DETAIL
-	htree_debug_walks_all(hts, htree, 0);
+	htree_debug_walks_all(hts, &ht_root, 0);
 #endif
 
-	_htree_get_most_index(hts, htree);
-	_htree_statis_info(hts, htree);
+	_htree_get_most_index(hts, &ht_root);
+	_htree_statis_info(hts, &ht_root);
 
-	_htree_remove_all(hts, htree);
+	_htree_remove_all(hts, &ht_root);
 
 	kfree(hts);
 }
diff --git a/lib/htree.c b/lib/htree.c
index be7b34b5d4e1..1fcdb8d69730 100644
--- a/lib/htree.c
+++ b/lib/htree.c
@@ -180,6 +180,9 @@ struct htree_data *ht_find(struct htree_state *hts,
 	struct htree_data *rdata = NULL;
 	struct hash_tree *rtree;
 
+	if (!htree)
+		return NULL;
+
 	if (_ht_find(hts, htree, index, &rdata, &rtree) == htf_find)
 		return rdata;
 	return NULL;
@@ -345,6 +348,9 @@ struct htree_data *ht_insert(struct htree_state *hts, struct hash_tree *htree,
 	struct hash_tree *rtree = NULL;
 	enum ht_flags htf;
 
+	if (!htree)
+		return NULL;
+
 	htf = _ht_find(hts, htree, hdata->index, &rdata, &rtree);
 
 	_ht_insert(hts, rtree, rdata, hdata, htf, req);
@@ -478,6 +484,9 @@ struct htree_data *ht_erase(struct htree_state *hts,
 {
 	struct htree_data *rdata = NULL;
 
+	if (!htree)
+		return NULL;
+
 	if (_ht_erase(hts, htree, &rdata, index) == htf_erase)
 		return rdata;
 
@@ -533,22 +542,31 @@ static void __ht_free_all(struct htree_state *hts,
 }
 
 /**
- * ht_destroy - public function to free hash tree
+ * ht_destroy_lock - public function to free hash tree
  * @hts: htree_state pointer
- * @htree: hash_tree pointer(root)
+ * @root: htree_root pointer(root)
  *
  * this function removes all hash tree, but it does not remove udata.
  */
-enum ht_flags ht_destroy(struct htree_state *hts, struct hash_tree *htree)
+enum ht_flags ht_destroy_lock(struct htree_state *hts, struct htree_root *root)
 {
 	s32 acnt = 0;
 	u64 dcnt = 0;
+	struct hash_tree *htree;
 
 	if (hts->acnt == 0 && hts->dcnt == 0)
 		return htf_ok;
 
+	htree = htree_first_rcu(root);
+	if (!htree)
+		return htf_none;
+
 	hts->dept = 0;
+
+	ht_lock(root);
 	__ht_free_all(hts, htree, &acnt, &dcnt);
+	RCU_INIT_POINTER(root->ht_first, NULL);
+	ht_unlock(root);
 
 	hts->acnt -= acnt;
 	hts->dcnt -= dcnt;
@@ -556,7 +574,7 @@ enum ht_flags ht_destroy_lock(struct htree_state *hts, struct htree_root *root)
 	return (hts->dept == 0 && hts->dcnt == 0 && hts->acnt == 0) ?
 		htf_ok : htf_none;
 }
-EXPORT_SYMBOL(ht_destroy);
+EXPORT_SYMBOL(ht_destroy_lock);
 
 /**
  * __ht_statis - private function to call recursively to calculate nodes
@@ -613,6 +631,9 @@ void ht_statis(struct htree_state *hts,
 	hts->dept = 0;
 	hts->dmax = 0;
 
+	if (!htree)
+		return;
+
 	__ht_statis(hts, htree, acnt, dcnt);
 }
 EXPORT_SYMBOL(ht_statis);