From patchwork Thu Mar 20 10:22:19 2025
X-Patchwork-Submitter: Przemek Kitszel
X-Patchwork-Id: 14023670
From: Przemek Kitszel
To: Matthew Wilcox
Cc: Przemek Kitszel, Michal Swiatkowski, Pierre Riteau, Andrew Morton,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Andy Shevchenko, Dave Hansen
Subject: [PATCH] xarray: make xa_alloc_cyclic() return 0 on all success cases
Date: Thu, 20 Mar 2025 11:22:19 +0100
Message-Id: <20250320102219.8101-1-przemyslaw.kitszel@intel.com>
X-Mailer: git-send-email 2.39.3

Change xa_alloc_cyclic() to return 0 on all success cases, including
wrap-around. Do the same for xa_alloc_cyclic_irq() and
xa_alloc_cyclic_bh().

This prevents future bugs that treat a return value of 1 as an error:

	int ret = xa_alloc_cyclic(...);
	if (ret) /* currently mishandles ret == 1 */
		goto failure;

Callers interested in whether wrap-around occurred can still use
__xa_alloc_cyclic(), which behaves as before. For now there is no such
user.
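To illustrate, a minimal caller-side sketch under the new semantics
(the helpers below are hypothetical examples for this description,
not taken from this patch or from any in-tree user):

	/* Callers that only care about success keep the usual pattern. */
	static int example_store(struct xarray *xa, u32 *next, void *entry)
	{
		u32 id;
		int err;

		err = xa_alloc_cyclic(xa, &id, entry, xa_limit_32b, next,
				      GFP_KERNEL);
		if (err)	/* now only -ENOMEM or -EBUSY */
			return err;
		return 0;
	}

	/* Callers that want to observe wrap-around use __xa_alloc_cyclic(). */
	static int example_store_notice_wrap(struct xarray *xa, u32 *next,
					     void *entry)
	{
		u32 id;
		int err;

		xa_lock(xa);
		err = __xa_alloc_cyclic(xa, &id, entry, xa_limit_32b, next,
					GFP_KERNEL);
		xa_unlock(xa);
		if (err < 0)
			return err;
		if (err == 1)	/* allocation succeeded after wrapping */
			pr_debug("ID allocation wrapped around\n");
		return 0;
	}

The second helper shows the escape hatch: taking the xa_lock and calling
__xa_alloc_cyclic() directly still exposes the 1-on-wrap-around return.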
Suggested-by: Matthew Wilcox
Link: https://lore.kernel.org/netdev/Z9gUd-5t8b5NX2wE@casper.infradead.org
Signed-off-by: Przemek Kitszel
---
CC: Michal Swiatkowski
CC: Pierre Riteau
CC: Andrew Morton
CC: linux-fsdevel@vger.kernel.org
CC: linux-mm@kvack.org
CC: linux-kernel@vger.kernel.org

Thanks to Andy and Dave for internal review feedback
CC: Andy Shevchenko
CC: Dave Hansen
---
 include/linux/xarray.h | 24 +++++++++++++++---------
 lib/test_xarray.c      | 17 +++++++++++++++--
 2 files changed, 30 insertions(+), 11 deletions(-)

diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 0b618ec04115..46eb751fd5df 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -965,10 +965,12 @@ static inline int __must_check xa_alloc_irq(struct xarray *xa, u32 *id,
  * Must only be operated on an xarray initialized with flag XA_FLAGS_ALLOC set
  * in xa_init_flags().
  *
+ * Note that callers interested in whether wrapping has occurred should
+ * use __xa_alloc_cyclic() instead.
+ *
  * Context: Any context. Takes and releases the xa_lock. May sleep if
  * the @gfp flags permit.
- * Return: 0 if the allocation succeeded without wrapping. 1 if the
- * allocation succeeded after wrapping, -ENOMEM if memory could not be
+ * Return: 0 if the allocation succeeded, -ENOMEM if memory could not be
  * allocated or -EBUSY if there are no free entries in @limit.
  */
 static inline int xa_alloc_cyclic(struct xarray *xa, u32 *id, void *entry,
@@ -981,7 +983,7 @@ static inline int xa_alloc_cyclic(struct xarray *xa, u32 *id, void *entry,
 	err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
 	xa_unlock(xa);
 
-	return err;
+	return err < 0 ? err : 0;
 }
 
 /**
@@ -1002,10 +1004,12 @@ static inline int xa_alloc_cyclic(struct xarray *xa, u32 *id, void *entry,
  * Must only be operated on an xarray initialized with flag XA_FLAGS_ALLOC set
  * in xa_init_flags().
  *
+ * Note that callers interested in whether wrapping has occurred should
+ * use __xa_alloc_cyclic() instead.
+ *
  * Context: Any context. Takes and releases the xa_lock while
  * disabling softirqs. May sleep if the @gfp flags permit.
- * Return: 0 if the allocation succeeded without wrapping. 1 if the
- * allocation succeeded after wrapping, -ENOMEM if memory could not be
+ * Return: 0 if the allocation succeeded, -ENOMEM if memory could not be
  * allocated or -EBUSY if there are no free entries in @limit.
  */
 static inline int xa_alloc_cyclic_bh(struct xarray *xa, u32 *id, void *entry,
@@ -1018,7 +1022,7 @@ static inline int xa_alloc_cyclic_bh(struct xarray *xa, u32 *id, void *entry,
 	err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
 	xa_unlock_bh(xa);
 
-	return err;
+	return err < 0 ? err : 0;
 }
 
 /**
@@ -1039,10 +1043,12 @@ static inline int xa_alloc_cyclic_bh(struct xarray *xa, u32 *id, void *entry,
  * Must only be operated on an xarray initialized with flag XA_FLAGS_ALLOC set
  * in xa_init_flags().
  *
+ * Note that callers interested in whether wrapping has occurred should
+ * use __xa_alloc_cyclic() instead.
+ *
  * Context: Process context. Takes and releases the xa_lock while
  * disabling interrupts. May sleep if the @gfp flags permit.
- * Return: 0 if the allocation succeeded without wrapping. 1 if the
- * allocation succeeded after wrapping, -ENOMEM if memory could not be
+ * Return: 0 if the allocation succeeded, -ENOMEM if memory could not be
  * allocated or -EBUSY if there are no free entries in @limit.
  */
 static inline int xa_alloc_cyclic_irq(struct xarray *xa, u32 *id, void *entry,
@@ -1055,7 +1061,7 @@ static inline int xa_alloc_cyclic_irq(struct xarray *xa, u32 *id, void *entry,
 	err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
 	xa_unlock_irq(xa);
 
-	return err;
+	return err < 0 ? err : 0;
 }
 
 /**
diff --git a/lib/test_xarray.c b/lib/test_xarray.c
index 0e865bab4a10..393ffaaf090c 100644
--- a/lib/test_xarray.c
+++ b/lib/test_xarray.c
@@ -1040,6 +1040,7 @@ static noinline void check_xa_alloc_3(struct xarray *xa, unsigned int base)
 	unsigned int i, id;
 	unsigned long index;
 	void *entry;
+	int ret;
 
 	XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, xa_mk_index(1), limit,
 				&next, GFP_KERNEL) != 0);
@@ -1059,7 +1060,7 @@ static noinline void check_xa_alloc_3(struct xarray *xa, unsigned int base)
 		else
 			entry = xa_mk_index(i - 0x3fff);
 		XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, entry, limit,
-					&next, GFP_KERNEL) != (id == 1));
+					&next, GFP_KERNEL) != 0);
 		XA_BUG_ON(xa, xa_mk_index(id) != entry);
 	}
 
@@ -1072,15 +1073,27 @@ static noinline void check_xa_alloc_3(struct xarray *xa, unsigned int base)
 				xa_limit_32b, &next, GFP_KERNEL) != 0);
 	XA_BUG_ON(xa, id != UINT_MAX);
 	XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, xa_mk_index(base),
-				xa_limit_32b, &next, GFP_KERNEL) != 1);
+				xa_limit_32b, &next, GFP_KERNEL) != 0);
 	XA_BUG_ON(xa, id != base);
 	XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, xa_mk_index(base + 1),
 				xa_limit_32b, &next, GFP_KERNEL) != 0);
 	XA_BUG_ON(xa, id != base + 1);
 
 	xa_for_each(xa, index, entry)
 		xa_erase_index(xa, index);
+	XA_BUG_ON(xa, !xa_empty(xa));
+	/* check wrap-around return of __xa_alloc_cyclic() */
+	next = UINT_MAX;
+	XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, xa_mk_index(UINT_MAX),
+				xa_limit_32b, &next, GFP_KERNEL) != 0);
+	xa_lock(xa);
+	ret = __xa_alloc_cyclic(xa, &id, xa_mk_index(base), xa_limit_32b,
+				&next, GFP_KERNEL);
+	xa_unlock(xa);
+	XA_BUG_ON(xa, ret != 1);
+	xa_for_each(xa, index, entry)
+		xa_erase_index(xa, index);
 
 	XA_BUG_ON(xa, !xa_empty(xa));
 }