From patchwork Wed Jan 31 22:51:24 2024
X-Patchwork-Submitter: Luis Chamberlain
X-Patchwork-Id: 13540081
From: Luis Chamberlain
To: willy@infradead.org, akpm@linux-foundation.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, hare@suse.com, p.raghav@samsung.com,
    da.gomez@samsung.com, djwong@kernel.org, david@fromorbit.com,
    mcgrof@kernel.org
Subject: [PATCH v2 1/2] test_xarray: add tests for advanced multi-index use
Date: Wed, 31 Jan 2024 14:51:24 -0800
Message-ID: <20240131225125.1370598-2-mcgrof@kernel.org>
In-Reply-To: <20240131225125.1370598-1-mcgrof@kernel.org>
References: <20240131225125.1370598-1-mcgrof@kernel.org>

The multi-index selftests are great, but they don't replicate exactly how
we deal with the page cache, which makes them a bit hard to follow, as the
page cache uses the advanced API. Add tests which use the advanced API,
mimicking what we do in the page cache. While at it, extend the example to
do what is needed for min order support.

Tested-by: Daniel Gomez
Signed-off-by: Luis Chamberlain
---
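For reviewers following along, a minimal sketch of the difference between
the two APIs the commit message refers to. This is illustrative only and
not part of the patch; the helper names store_normal() and store_advanced()
and their parameters are made up for the example:

#include <linux/xarray.h>

/*
 * Normal API: locking and memory allocation are handled internally, but
 * xa_store() has no notion of a multi-index order.
 */
static void store_normal(struct xarray *xa, unsigned long index, void *entry)
{
	xa_store(xa, index, entry, GFP_KERNEL);
}

/*
 * Advanced API, the pattern the page cache and the tests below use: the
 * caller owns the lock and retries the allocation via xas_nomem().
 */
static void store_advanced(struct xarray *xa, unsigned long index,
			   unsigned int order, void *entry)
{
	XA_STATE(xas, xa, index);

	xas_set_order(&xas, index, order);
	do {
		xas_lock_irq(&xas);
		xas_store(&xas, entry);
		xas_unlock_irq(&xas);
	} while (xas_nomem(&xas, GFP_KERNEL));
}
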
 lib/test_xarray.c | 164 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 164 insertions(+)

diff --git a/lib/test_xarray.c b/lib/test_xarray.c
index e77d4856442c..8b23481f0e8f 100644
--- a/lib/test_xarray.c
+++ b/lib/test_xarray.c
@@ -674,6 +674,169 @@ static noinline void check_multi_store(struct xarray *xa)
 #endif
 }
 
+#ifdef CONFIG_XARRAY_MULTI
+/* mimics page cache __filemap_add_folio() */
+static noinline void check_xa_multi_store_adv_add(struct xarray *xa,
+						  unsigned long index,
+						  unsigned int order,
+						  void *p)
+{
+	XA_STATE(xas, xa, index);
+	unsigned int nrpages = 1UL << order;
+
+	/* users are responsible for index alignment to the order when adding */
+	XA_BUG_ON(xa, index & (nrpages - 1));
+
+	xas_set_order(&xas, index, order);
+
+	do {
+		xas_lock_irq(&xas);
+
+		xas_store(&xas, p);
+		XA_BUG_ON(xa, xas_error(&xas));
+		XA_BUG_ON(xa, xa_load(xa, index) != p);
+
+		xas_unlock_irq(&xas);
+	} while (xas_nomem(&xas, GFP_KERNEL));
+
+	XA_BUG_ON(xa, xas_error(&xas));
+}
+
+/* mimics page_cache_delete() */
+static noinline void check_xa_multi_store_adv_del_entry(struct xarray *xa,
+							unsigned long index,
+							unsigned int order)
+{
+	XA_STATE(xas, xa, index);
+
+	xas_set_order(&xas, index, order);
+	xas_store(&xas, NULL);
+	xas_init_marks(&xas);
+}
+
+static noinline void check_xa_multi_store_adv_delete(struct xarray *xa,
+						     unsigned long index,
+						     unsigned int order)
+{
+	xa_lock_irq(xa);
+	check_xa_multi_store_adv_del_entry(xa, index, order);
+	xa_unlock_irq(xa);
+}
+
+/* mimics page cache filemap_get_entry() */
+static noinline void *test_get_entry(struct xarray *xa, unsigned long index)
+{
+	XA_STATE(xas, xa, index);
+	void *p;
+
+	rcu_read_lock();
+repeat:
+	xas_reset(&xas);
+	p = xas_load(&xas);
+	if (xas_retry(&xas, p))
+		goto repeat;
+	rcu_read_unlock();
+
+	return p;
+}
+
+static unsigned long some_val = 0xdeadbeef;
+static unsigned long some_val_2 = 0xdeaddead;
+
+/* mimics the page cache usage */
+static noinline void check_xa_multi_store_adv(struct xarray *xa,
+					      unsigned long pos,
+					      unsigned int order)
+{
+	unsigned int nrpages = 1UL << order;
+	unsigned long index, base, next_index, next_next_index;
+	unsigned int i;
+
+	index = pos >> PAGE_SHIFT;
+	base = round_down(index, nrpages);
+	next_index = round_down(base + nrpages, nrpages);
+	next_next_index = round_down(next_index + nrpages, nrpages);
+
+	check_xa_multi_store_adv_add(xa, base, order, &some_val);
+
+	for (i = 0; i < nrpages; i++)
+		XA_BUG_ON(xa, test_get_entry(xa, base + i) != &some_val);
+
+	XA_BUG_ON(xa, test_get_entry(xa, next_index) != NULL);
+
+	/* Use order 0 for the next item */
+	check_xa_multi_store_adv_add(xa, next_index, 0, &some_val_2);
+	XA_BUG_ON(xa, test_get_entry(xa, next_index) != &some_val_2);
+
+	/* Remove the next item */
+	check_xa_multi_store_adv_delete(xa, next_index, 0);
+
+	/* Now use order for a new pointer */
+	check_xa_multi_store_adv_add(xa, next_index, order, &some_val_2);
+
+	for (i = 0; i < nrpages; i++)
+		XA_BUG_ON(xa, test_get_entry(xa, next_index + i) != &some_val_2);
+
+	check_xa_multi_store_adv_delete(xa, next_index, order);
+	check_xa_multi_store_adv_delete(xa, base, order);
+	XA_BUG_ON(xa, !xa_empty(xa));
+
+	/* starting fresh again */
+
+	/* let's test some holes now */
+
+	/* hole at base and next_next */
+	check_xa_multi_store_adv_add(xa, next_index, order, &some_val_2);
+
+	for (i = 0; i < nrpages; i++)
+		XA_BUG_ON(xa, test_get_entry(xa, base + i) != NULL);
+
+	for (i = 0; i < nrpages; i++)
+		XA_BUG_ON(xa, test_get_entry(xa, next_index + i) != &some_val_2);
+
+	for (i = 0; i < nrpages; i++)
+		XA_BUG_ON(xa, test_get_entry(xa, next_next_index + i) != NULL);
+
+	check_xa_multi_store_adv_delete(xa, next_index, order);
+	XA_BUG_ON(xa, !xa_empty(xa));
+
+	/* hole at base and next */
+
+	check_xa_multi_store_adv_add(xa, next_next_index, order, &some_val_2);
+
+	for (i = 0; i < nrpages; i++)
+		XA_BUG_ON(xa, test_get_entry(xa, base + i) != NULL);
+
+	for (i = 0; i < nrpages; i++)
+		XA_BUG_ON(xa, test_get_entry(xa, next_index + i) != NULL);
+
+	for (i = 0; i < nrpages; i++)
+		XA_BUG_ON(xa, test_get_entry(xa, next_next_index + i) != &some_val_2);
+
+	check_xa_multi_store_adv_delete(xa, next_next_index, order);
+	XA_BUG_ON(xa, !xa_empty(xa));
+}
+#endif
+
+static noinline void check_multi_store_advanced(struct xarray *xa)
+{
+#ifdef CONFIG_XARRAY_MULTI
+	unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 20 : 1;
+	unsigned long end = ULONG_MAX/2;
+	unsigned long pos, i;
+
+	/*
+	 * About 117 million tests below.
+	 */
+	for (pos = 7; pos < end; pos = (pos * pos) + 564) {
+		for (i = 0; i < max_order; i++) {
+			check_xa_multi_store_adv(xa, pos, i);
+			check_xa_multi_store_adv(xa, pos + 157, i);
+		}
+	}
+#endif
+}
+
 static noinline void check_xa_alloc_1(struct xarray *xa, unsigned int base)
 {
 	int i;
@@ -1804,6 +1967,7 @@ static int xarray_checks(void)
 	check_reserve(&array);
 	check_reserve(&xa0);
 	check_multi_store(&array);
+	check_multi_store_advanced(&array);
 	check_get_order(&array);
 	check_xa_alloc();
 	check_find(&array);

From patchwork Wed Jan 31 22:51:25 2024
X-Patchwork-Submitter: Luis Chamberlain
X-Patchwork-Id: 13540082
From: Luis Chamberlain
To: willy@infradead.org, akpm@linux-foundation.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, hare@suse.com, p.raghav@samsung.com,
    da.gomez@samsung.com, djwong@kernel.org, david@fromorbit.com,
    mcgrof@kernel.org
Subject: [PATCH v2 2/2] XArray: add cmpxchg order test
Date: Wed, 31 Jan 2024 14:51:25 -0800
Message-ID: <20240131225125.1370598-3-mcgrof@kernel.org>
In-Reply-To: <20240131225125.1370598-1-mcgrof@kernel.org>
References: <20240131225125.1370598-1-mcgrof@kernel.org>

From: Daniel Gomez

XArray multi-index entries do not keep track of the stored order once the
entry is marked as used with cmpxchg, that is, conditionally replaced with
NULL. Add a test to check that the order is indeed lost. The test also
verifies the order and the entries for all the tied indexes before and
after the NULL replacement with xa_cmpxchg().

Add another entry at 1 << order that keeps the node around, along with the
order information for the NULL-entry, after xa_cmpxchg().

Signed-off-by: Daniel Gomez
Signed-off-by: Luis Chamberlain
---
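For reviewers, a condensed sketch of the behaviour this test pins down. It
is illustrative only and not part of the patch: show_order_loss() is a
made-up name, xa_store_order() is the local helper already defined earlier
in lib/test_xarray.c (not a generic XArray API), and CONFIG_XARRAY_MULTI is
assumed, so the sketch should be read as if it sat inside that file:

static void show_order_loss(struct xarray *xa)
{
	void *five = xa_mk_value(5);
	unsigned int order = 3;

	/* Store an order-3 entry: indices 0..7 all alias the same entry. */
	xa_store_order(xa, 0, order, five, GFP_KERNEL);
	/* Here xa_get_order(xa, 0) reports 3 for every tied index. */

	/* Conditionally replace the entry at index 0 with NULL. */
	xa_cmpxchg(xa, 0, five, NULL, GFP_KERNEL);
	/*
	 * All eight tied indices now read back NULL and xa_get_order(xa, 0)
	 * reports 0: the order information is gone along with the entry.
	 */
}
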
 lib/test_xarray.c | 54 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)

diff --git a/lib/test_xarray.c b/lib/test_xarray.c
index 8b23481f0e8f..d4e55b4867dc 100644
--- a/lib/test_xarray.c
+++ b/lib/test_xarray.c
@@ -423,6 +423,59 @@ static noinline void check_cmpxchg(struct xarray *xa)
 	XA_BUG_ON(xa, !xa_empty(xa));
 }
 
+static noinline void check_cmpxchg_order(struct xarray *xa)
+{
+#ifdef CONFIG_XARRAY_MULTI
+	void *FIVE = xa_mk_value(5);
+	unsigned int i, order = 3;
+
+	XA_BUG_ON(xa, xa_store_order(xa, 0, order, FIVE, GFP_KERNEL));
+
+	/* Check entry FIVE has the order saved */
+	XA_BUG_ON(xa, xa_get_order(xa, xa_to_value(FIVE)) != order);
+
+	/* Check all the tied indexes have the same entry and order */
+	for (i = 0; i < (1 << order); i++) {
+		XA_BUG_ON(xa, xa_load(xa, i) != FIVE);
+		XA_BUG_ON(xa, xa_get_order(xa, i) != order);
+	}
+
+	/* Ensure that nothing is stored at index '1 << order' */
+	XA_BUG_ON(xa, xa_load(xa, 1 << order) != NULL);
+
+	/*
+	 * Additionally, keep the node information and the order at
+	 * '1 << order'
+	 */
+	XA_BUG_ON(xa, xa_store_order(xa, 1 << order, order, FIVE, GFP_KERNEL));
+	for (i = (1 << order); i < (1 << order) + (1 << order) - 1; i++) {
+		XA_BUG_ON(xa, xa_load(xa, i) != FIVE);
+		XA_BUG_ON(xa, xa_get_order(xa, i) != order);
+	}
+
+	/* Conditionally replace FIVE entry at index '0' with NULL */
+	XA_BUG_ON(xa, xa_cmpxchg(xa, 0, FIVE, NULL, GFP_KERNEL) != FIVE);
+
+	/* Verify the order is lost at FIVE (and old) entries */
+	XA_BUG_ON(xa, xa_get_order(xa, xa_to_value(FIVE)) != 0);
+
+	/* Verify the order and entries are lost in all the tied indexes */
+	for (i = 0; i < (1 << order); i++) {
+		XA_BUG_ON(xa, xa_load(xa, i) != NULL);
+		XA_BUG_ON(xa, xa_get_order(xa, i) != 0);
+	}
+
+	/* Verify node and order are kept at '1 << order' */
+	for (i = (1 << order); i < (1 << order) + (1 << order) - 1; i++) {
+		XA_BUG_ON(xa, xa_load(xa, i) != FIVE);
+		XA_BUG_ON(xa, xa_get_order(xa, i) != order);
+	}
+
+	xa_store_order(xa, 0, BITS_PER_LONG - 1, NULL, GFP_KERNEL);
+	XA_BUG_ON(xa, !xa_empty(xa));
+#endif
+}
+
 static noinline void check_reserve(struct xarray *xa)
 {
 	void *entry;
@@ -1964,6 +2017,7 @@ static int xarray_checks(void)
 	check_xas_erase(&array);
 	check_insert(&array);
 	check_cmpxchg(&array);
+	check_cmpxchg_order(&array);
 	check_reserve(&array);
 	check_reserve(&xa0);
 	check_multi_store(&array);