From: Matthew Wilcox
To: Andrew Morton
Cc: Matthew Wilcox, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH v7 18/61] xarray: Add xa_cmpxchg and xa_insert
Date: Mon, 19 Feb 2018 11:45:13 -0800
Message-Id: <20180219194556.6575-19-willy@infradead.org>
In-Reply-To: <20180219194556.6575-1-willy@infradead.org>
References: <20180219194556.6575-1-willy@infradead.org>

From: Matthew Wilcox

Like cmpxchg(), xa_cmpxchg will only store to the index if the current
entry matches the old entry.  It returns the current entry, which is
usually more useful than the errno returned by radix_tree_insert().
For the users who really only want the errno, the xa_insert() wrapper
provides a more convenient calling convention.
Signed-off-by: Matthew Wilcox
---
 include/linux/xarray.h                 | 60 ++++++++++++++++++++++++++++
 lib/xarray.c                           | 71 ++++++++++++++++++++++++++++++++++
 tools/testing/radix-tree/xarray-test.c | 10 +++++
 3 files changed, 141 insertions(+)

diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 38e290df2ff0..e95ebe2488f9 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -218,6 +218,8 @@ struct xarray {
 void xa_init_flags(struct xarray *, gfp_t flags);
 void *xa_load(struct xarray *, unsigned long index);
 void *xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
+void *xa_cmpxchg(struct xarray *, unsigned long index,
+			void *old, void *entry, gfp_t);
 bool xa_get_tag(struct xarray *, unsigned long index, xa_tag_t);
 void xa_set_tag(struct xarray *, unsigned long index, xa_tag_t);
 void xa_clear_tag(struct xarray *, unsigned long index, xa_tag_t);
@@ -277,6 +279,34 @@ static inline bool xa_tagged(const struct xarray *xa, xa_tag_t tag)
 	return xa->xa_flags & XA_FLAGS_TAG(tag);
 }
 
+/**
+ * xa_insert() - Store this entry in the XArray unless another entry is
+ * already present.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * If you would rather see the existing entry in the array, use xa_cmpxchg().
+ * This function is for users who don't care what the entry is, only that
+ * one is present.
+ *
+ * Context: Process context.  Takes and releases the xa_lock.
+ * May sleep if the @gfp flags permit.
+ * Return: 0 if the store succeeded.  -EEXIST if another entry was present.
+ * -ENOMEM if memory could not be allocated.
+ */
+static inline int xa_insert(struct xarray *xa, unsigned long index,
+		void *entry, gfp_t gfp)
+{
+	void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp);
+	if (!curr)
+		return 0;
+	if (xa_is_err(curr))
+		return xa_err(curr);
+	return -EEXIST;
+}
+
 #define xa_trylock(xa)		spin_trylock(&(xa)->xa_lock)
 #define xa_lock(xa)		spin_lock(&(xa)->xa_lock)
 #define xa_unlock(xa)		spin_unlock(&(xa)->xa_lock)
@@ -296,9 +326,39 @@ static inline bool xa_tagged(const struct xarray *xa, xa_tag_t tag)
  */
 void *__xa_erase(struct xarray *, unsigned long index);
 void *__xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
+void *__xa_cmpxchg(struct xarray *, unsigned long index, void *old,
+		void *entry, gfp_t);
 void __xa_set_tag(struct xarray *, unsigned long index, xa_tag_t);
 void __xa_clear_tag(struct xarray *, unsigned long index, xa_tag_t);
 
+/**
+ * __xa_insert() - Store this entry in the XArray unless another entry is
+ * already present.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * If you would rather see the existing entry in the array, use __xa_cmpxchg().
+ * This function is for users who don't care what the entry is, only that
+ * one is present.
+ *
+ * Context: Any context.  Expects xa_lock to be held on entry.  May
+ * release and reacquire xa_lock if the @gfp flags permit.
+ * Return: 0 if the store succeeded.  -EEXIST if another entry was present.
+ * -ENOMEM if memory could not be allocated.
+ */
+static inline int __xa_insert(struct xarray *xa, unsigned long index,
+		void *entry, gfp_t gfp)
+{
+	void *curr = __xa_cmpxchg(xa, index, NULL, entry, gfp);
+	if (!curr)
+		return 0;
+	if (xa_is_err(curr))
+		return xa_err(curr);
+	return -EEXIST;
+}
+
 /* Everything below here is the Advanced API.  Proceed with caution. */
 
 /*
diff --git a/lib/xarray.c b/lib/xarray.c
index 9e50804f168c..a231699d894a 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -937,6 +937,77 @@ void *__xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
 }
 EXPORT_SYMBOL(__xa_store);
 
+/**
+ * xa_cmpxchg() - Conditionally replace an entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @old: Old value to test against.
+ * @entry: New value to place in array.
+ * @gfp: Memory allocation flags.
+ *
+ * If the entry at @index is the same as @old, replace it with @entry.
+ * If the return value is equal to @old, then the exchange was successful.
+ *
+ * Context: Process context.  Takes and releases the xa_lock.  May sleep
+ * if the @gfp flags permit.
+ * Return: The old value at this index or xa_err() if an error happened.
+ */
+void *xa_cmpxchg(struct xarray *xa, unsigned long index,
+			void *old, void *entry, gfp_t gfp)
+{
+	XA_STATE(xas, xa, index);
+	void *curr;
+
+	if (WARN_ON_ONCE(xa_is_internal(entry)))
+		return XA_ERROR(-EINVAL);
+
+	do {
+		xas_lock(&xas);
+		curr = xas_load(&xas);
+		if (curr == old)
+			xas_store(&xas, entry);
+		xas_unlock(&xas);
+	} while (xas_nomem(&xas, gfp));
+
+	return xas_result(&xas, curr);
+}
+EXPORT_SYMBOL(xa_cmpxchg);
+
+/**
+ * __xa_cmpxchg() - Store this entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @old: Old value to test against.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * You must already be holding the xa_lock when calling this function.
+ * It will drop the lock if needed to allocate memory, and then reacquire
+ * it afterwards.
+ *
+ * Context: Any context.  Expects xa_lock to be held on entry.  May
+ * release and reacquire xa_lock if @gfp flags permit.
+ * Return: The old entry at this index or xa_err() if an error happened.
+ */
+void *__xa_cmpxchg(struct xarray *xa, unsigned long index,
+			void *old, void *entry, gfp_t gfp)
+{
+	XA_STATE(xas, xa, index);
+	void *curr;
+
+	if (WARN_ON_ONCE(xa_is_internal(entry)))
+		return XA_ERROR(-EINVAL);
+
+	do {
+		curr = xas_load(&xas);
+		if (curr == old)
+			xas_store(&xas, entry);
+	} while (__xas_nomem(&xas, gfp));
+
+	return xas_result(&xas, curr);
+}
+EXPORT_SYMBOL(__xa_cmpxchg);
+
 /**
  * __xa_set_tag() - Set this tag on this entry while locked.
  * @xa: XArray.
diff --git a/tools/testing/radix-tree/xarray-test.c b/tools/testing/radix-tree/xarray-test.c
index 5defd0b9f85c..d6a969d999d9 100644
--- a/tools/testing/radix-tree/xarray-test.c
+++ b/tools/testing/radix-tree/xarray-test.c
@@ -84,6 +84,15 @@ void check_xa_shrink(struct xarray *xa)
 	assert(xa_load(xa, 0) == xa_mk_value(0));
 }
 
+void check_cmpxchg(struct xarray *xa)
+{
+	assert(xa_empty(xa));
+	assert(!xa_store(xa, 12345678, xa_mk_value(12345678), GFP_KERNEL));
+	assert(!xa_cmpxchg(xa, 5, xa_mk_value(5), NULL, GFP_KERNEL));
+	assert(xa_erase(xa, 12345678) == xa_mk_value(12345678));
+	assert(xa_empty(xa));
+}
+
 void check_multi_store(struct xarray *xa)
 {
 	unsigned long i, j, k;
@@ -149,6 +158,7 @@ void xarray_checks(void)
 	check_xa_shrink(&array);
 	item_kill_tree(&array);
 
+	check_cmpxchg(&array);
 	check_multi_store(&array);
 	item_kill_tree(&array);
 }
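
As a quick illustration of the calling conventions described in the commit
message, here is a minimal caller sketch (not part of the patch): it reserves
an index with xa_insert() and then publishes the real entry with xa_cmpxchg().
The function name example_publish, the reserve-then-publish pattern, and the
use of xa_mk_value() as a placeholder are illustrative assumptions; only the
xa_insert()/xa_cmpxchg() semantics come from the patch above.

/* Illustrative sketch only -- not part of the patch above. */
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/xarray.h>

/*
 * Hypothetical caller: reserve @index with xa_insert(), which returns
 * -EEXIST if another entry is already present, then replace the
 * placeholder with xa_cmpxchg().  xa_cmpxchg() returns the entry that
 * was in the array, so comparing it with the expected old value tells
 * us whether the exchange actually happened.
 */
static int example_publish(struct xarray *xa, unsigned long index)
{
	void *old = xa_mk_value(0);	/* placeholder entry */
	void *curr;
	int err;

	err = xa_insert(xa, index, old, GFP_KERNEL);
	if (err)
		return err;		/* -EEXIST or -ENOMEM */

	curr = xa_cmpxchg(xa, index, old, xa_mk_value(index), GFP_KERNEL);
	if (xa_is_err(curr))
		return xa_err(curr);	/* e.g. -ENOMEM */

	/* The exchange succeeded only if the placeholder was still there. */
	return curr == old ? 0 : -EAGAIN;
}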