From patchwork Tue Dec 12 02:27:15 2023
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 13488373
From: Yury Norov
Subject: [PATCH v3 01/35] lib/find: add atomic find_bit() primitives
Date: Mon, 11 Dec 2023 18:27:15 -0800
Message-Id: <20231212022749.625238-2-yury.norov@gmail.com>
In-Reply-To: <20231212022749.625238-1-yury.norov@gmail.com>
References: <20231212022749.625238-1-yury.norov@gmail.com>

Add helpers around test_and_{set,clear}_bit() that allow to search for
clear or set bits and flip them atomically.

The target patterns may look like this:

	for (idx = 0; idx < nbits; idx++)
		if (test_and_clear_bit(idx, bitmap))
			do_something(idx);

Or like this:

	do {
		bit = find_first_bit(bitmap, nbits);
		if (bit >= nbits)
			return nbits;
	} while (!test_and_clear_bit(bit, bitmap));

	return bit;

In both cases, the open-coded loop may be converted to a single function
or iterator call. Correspondingly:

	for_each_test_and_clear_bit(idx, bitmap, nbits)
		do_something(idx);

Or:

	return find_and_clear_bit(bitmap, nbits);

Obviously, the less routine code people have to write themselves, the
lower the chance of making a mistake.

These are not only handy helpers: they also resolve a non-trivial issue
with using the non-atomic find_bit() together with the atomic
test_and_{set,clear}_bit(). The catch is that find_bit() implies that the
bitmap is a regular non-volatile piece of memory, so the compiler is
allowed to use optimization techniques like re-fetching memory instead of
caching it.
For example, find_first_bit() is implemented like this:

	for (idx = 0; idx * BITS_PER_LONG < sz; idx++) {
		val = addr[idx];
		if (val) {
			sz = min(idx * BITS_PER_LONG + __ffs(val), sz);
			break;
		}
	}

On register-memory architectures, like x86, the compiler may decide to
access memory twice - first to compare against 0, and second to fetch the
value to pass to __ffs(). When find_first_bit() runs on volatile memory,
the memory may change in between, and for instance 0 may end up being
passed to __ffs(), which is undefined. This is a potentially dangerous
call.

find_and_clear_bit(), as a wrapper around test_and_clear_bit(), naturally
treats the underlying bitmap as volatile memory and prevents the compiler
from such optimizations.

KCSAN now catches exactly this type of situation and warns about such
concurrent memory modifications. We can use it to reveal improper usage of
find_bit() and convert it to the atomic find_and_*_bit() as appropriate.

In some cases, concurrent operations with plain find_bit() are acceptable.
For example:

 - two threads running find_*_bit(): safe wrt ffs(0), and the returned
   value is correct because the underlying bitmap is unchanged;
 - find_next_bit() in parallel with set_bit() or clear_bit(), when the
   modified bit is prior to the start bit of the search: safe and correct;
 - find_first_bit() in parallel with set_bit(): safe, but may return the
   wrong bit number;
 - find_first_zero_bit() in parallel with clear_bit(): same as above.

In the last two cases find_bit() may not return the correct bit number,
but that may be OK if the caller requires any (not necessarily the first)
set or clear bit, correspondingly. In such cases, KCSAN may be safely
silenced with data_race(). But in most cases where KCSAN detects
concurrency, people should carefully review their code and likely protect
critical sections or switch to the atomic find_and_*_bit(), as
appropriate.

The first patch of the series adds the following atomic primitives:

	find_and_set_bit(addr, nbits);
	find_and_set_next_bit(addr, nbits, start);
	...

Here, the find_and_{set,clear} part refers to the corresponding
test_and_{set,clear}_bit function. Suffixes like _wrap or _lock derive
their semantics from the corresponding find() or test() functions. For
brevity, the naming omits the fact that we search for a zero bit in
find_and_set, and correspondingly for a set bit in the find_and_clear
functions.

The patch also adds iterators with atomic semantics, like
for_each_test_and_set_bit(). Here, the naming rule is to simply prefix the
corresponding atomic operation with 'for_each'.
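To illustrate the intended usage, below is a minimal sketch built on the
API added by this patch; my_id_map, MY_MAX_IDS, my_alloc_id() and
my_handle_event() are hypothetical names used only for this example:

	#include <linux/bitmap.h>
	#include <linux/errno.h>
	#include <linux/find.h>

	#define MY_MAX_IDS	64

	static DECLARE_BITMAP(my_id_map, MY_MAX_IDS);

	void my_handle_event(unsigned long bit);

	/* Allocate a free ID: atomically find a clear bit and set it. */
	static int my_alloc_id(void)
	{
		unsigned long id = find_and_set_bit(my_id_map, MY_MAX_IDS);

		return id < MY_MAX_IDS ? (int)id : -EBUSY;
	}

	/* Drain pending work: each set bit is cleared atomically as it's visited. */
	static void my_drain_pending(unsigned long *pending, unsigned long nbits)
	{
		unsigned long bit;

		for_each_test_and_clear_bit(bit, pending, nbits)
			my_handle_event(bit);
	}

Either way, no caller-side locking is needed to guarantee that a returned
or iterated bit belongs to the caller exclusively.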
CC: Bart Van Assche
CC: Sergey Shtylyov
Signed-off-by: Yury Norov
---
 include/linux/find.h | 293 +++++++++++++++++++++++++++++++++++++++++++
 lib/find_bit.c       |  85 +++++++++++++
 2 files changed, 378 insertions(+)

diff --git a/include/linux/find.h b/include/linux/find.h
index 5e4f39ef2e72..237513356ffa 100644
--- a/include/linux/find.h
+++ b/include/linux/find.h
@@ -32,6 +32,16 @@ extern unsigned long _find_first_and_bit(const unsigned long *addr1,
 extern unsigned long _find_first_zero_bit(const unsigned long *addr, unsigned long size);
 extern unsigned long _find_last_bit(const unsigned long *addr, unsigned long size);
 
+unsigned long _find_and_set_bit(volatile unsigned long *addr, unsigned long nbits);
+unsigned long _find_and_set_next_bit(volatile unsigned long *addr, unsigned long nbits,
+					unsigned long start);
+unsigned long _find_and_set_bit_lock(volatile unsigned long *addr, unsigned long nbits);
+unsigned long _find_and_set_next_bit_lock(volatile unsigned long *addr, unsigned long nbits,
+						unsigned long start);
+unsigned long _find_and_clear_bit(volatile unsigned long *addr, unsigned long nbits);
+unsigned long _find_and_clear_next_bit(volatile unsigned long *addr, unsigned long nbits,
+					unsigned long start);
+
 #ifdef __BIG_ENDIAN
 unsigned long _find_first_zero_bit_le(const unsigned long *addr, unsigned long size);
 unsigned long _find_next_zero_bit_le(const void *addr, unsigned
@@ -460,6 +470,267 @@ unsigned long __for_each_wrap(const unsigned long *bitmap, unsigned long size,
 	return bit < start ? bit : size;
 }
 
+/**
+ * find_and_set_bit - Find a zero bit and set it atomically
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the bitmap. It's also not
+ * guaranteed that if @nbits is returned, the bitmap is empty.
+ *
+ * The function does guarantee that if returned value is in range [0 .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and set bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_set_bit(volatile unsigned long *addr, unsigned long nbits)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr | ~GENMASK(nbits - 1, 0);
+			if (val == ~0UL)
+				return nbits;
+			ret = ffz(val);
+		} while (test_and_set_bit(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_set_bit(addr, nbits);
+}
+
+
+/**
+ * find_and_set_next_bit - Find a zero bit and set it, starting from @offset
+ * @addr: The address to base the search on
+ * @nbits: The bitmap nbits in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the bitmap, starting from @offset.
+ * It's also not guaranteed that if @nbits is returned, the bitmap is empty.
+ *
+ * The function does guarantee that if returned value is in range [@offset .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and set bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_set_next_bit(volatile unsigned long *addr,
+				    unsigned long nbits, unsigned long offset)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr | ~GENMASK(nbits - 1, offset);
+			if (val == ~0UL)
+				return nbits;
+			ret = ffz(val);
+		} while (test_and_set_bit(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_set_next_bit(addr, nbits, offset);
+}
+
+/**
+ * find_and_set_bit_wrap - find and set bit starting at @offset, wrapping around zero
+ * @addr: The first address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * Returns: the bit number for the next clear bit, or first clear bit up to @offset,
+ * while atomically setting it. If no bits are found, returns @nbits.
+ */
+static inline
+unsigned long find_and_set_bit_wrap(volatile unsigned long *addr,
+				    unsigned long nbits, unsigned long offset)
+{
+	unsigned long bit = find_and_set_next_bit(addr, nbits, offset);
+
+	if (bit < nbits || offset == 0)
+		return bit;
+
+	bit = find_and_set_bit(addr, offset);
+	return bit < offset ? bit : nbits;
+}
+
+/**
+ * find_and_set_bit_lock - find a zero bit, then set it atomically with lock
+ * @addr: The address to base the search on
+ * @nbits: The bitmap nbits in bits
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the bitmap. It's also not
+ * guaranteed that if @nbits is returned, the bitmap is empty.
+ *
+ * The function does guarantee that if returned value is in range [0 .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and set bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_set_bit_lock(volatile unsigned long *addr, unsigned long nbits)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr | ~GENMASK(nbits - 1, 0);
+			if (val == ~0UL)
+				return nbits;
+			ret = ffz(val);
+		} while (test_and_set_bit_lock(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_set_bit_lock(addr, nbits);
+}
+
+/**
+ * find_and_set_next_bit_lock - find a zero bit and set it atomically with lock
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the range. It's also not
+ * guaranteed that if @nbits is returned, the bitmap is empty.
+ *
+ * The function does guarantee that if returned value is in range [@offset .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and set bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_set_next_bit_lock(volatile unsigned long *addr,
+					 unsigned long nbits, unsigned long offset)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr | ~GENMASK(nbits - 1, offset);
+			if (val == ~0UL)
+				return nbits;
+			ret = ffz(val);
+		} while (test_and_set_bit_lock(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_set_next_bit_lock(addr, nbits, offset);
+}
+
+/**
+ * find_and_set_bit_wrap_lock - find zero bit starting at @offset and set it
+ * with lock, and wrap around zero if nothing found
+ * @addr: The first address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * Returns: the bit number for the next clear bit, or first clear bit up to @offset,
+ * while atomically setting it. If no bits are found, returns @nbits.
+ */
+static inline
+unsigned long find_and_set_bit_wrap_lock(volatile unsigned long *addr,
+					 unsigned long nbits, unsigned long offset)
+{
+	unsigned long bit = find_and_set_next_bit_lock(addr, nbits, offset);
+
+	if (bit < nbits || offset == 0)
+		return bit;
+
+	bit = find_and_set_bit_lock(addr, offset);
+	return bit < offset ? bit : nbits;
+}
+
+/**
+ * find_and_clear_bit - Find a set bit and clear it atomically
+ * @addr: The address to base the search on
+ * @nbits: The bitmap nbits in bits
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the bitmap. It's also not
+ * guaranteed that if @nbits is returned, the bitmap is empty.
+ *
+ * The function does guarantee that if returned value is in range [0 .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and cleared bit, or @nbits if no bits found
+ */
+static inline unsigned long find_and_clear_bit(volatile unsigned long *addr, unsigned long nbits)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr & GENMASK(nbits - 1, 0);
+			if (val == 0)
+				return nbits;
+			ret = __ffs(val);
+		} while (!test_and_clear_bit(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_clear_bit(addr, nbits);
+}
+
+/**
+ * find_and_clear_next_bit - Find a set bit next after @offset, and clear it atomically
+ * @addr: The address to base the search on
+ * @nbits: The bitmap nbits in bits
+ * @offset: bit offset at which to start searching
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the range. It's also not
+ * guaranteed that if @nbits is returned, there's no set bits after @offset.
+ *
+ * The function does guarantee that if returned value is in range [@offset .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and cleared bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_clear_next_bit(volatile unsigned long *addr,
+				      unsigned long nbits, unsigned long offset)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr & GENMASK(nbits - 1, offset);
+			if (val == 0)
+				return nbits;
+			ret = __ffs(val);
+		} while (!test_and_clear_bit(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_clear_next_bit(addr, nbits, offset);
+}
+
 /**
  * find_next_clump8 - find next 8-bit clump with set bits in a memory region
  * @clump: location to store copy of found clump
@@ -577,6 +848,28 @@ unsigned long find_next_bit_le(const void *addr, unsigned
 #define for_each_set_bit_from(bit, addr, size) \
 	for (; (bit) = find_next_bit((addr), (size), (bit)), (bit) < (size); (bit)++)
 
+/* same as for_each_set_bit() but atomically clears each found bit */
+#define for_each_test_and_clear_bit(bit, addr, size) \
+	for ((bit) = 0; \
+	     (bit) = find_and_clear_next_bit((addr), (size), (bit)), (bit) < (size); \
+	     (bit)++)
+
+/* same as for_each_set_bit_from() but atomically clears each found bit */
+#define for_each_test_and_clear_bit_from(bit, addr, size) \
+	for (; (bit) = find_and_clear_next_bit((addr), (size), (bit)), (bit) < (size); (bit)++)
+
+/* same as for_each_clear_bit() but atomically sets each found bit */
+#define for_each_test_and_set_bit(bit, addr, size) \
+	for ((bit) = 0; \
+	     (bit) = find_and_set_next_bit((addr), (size), (bit)), (bit) < (size); \
+	     (bit)++)
+
+/* same as for_each_clear_bit_from() but atomically sets each found bit */
+#define for_each_test_and_set_bit_from(bit, addr, size) \
+	for (; \
+	     (bit) = find_and_set_next_bit((addr), (size), (bit)), (bit) < (size); \
+	     (bit)++)
+
 #define for_each_clear_bit(bit, addr, size) \
 	for ((bit) = 0; \
 	     (bit) = find_next_zero_bit((addr), (size), (bit)), (bit) < (size); \

diff --git a/lib/find_bit.c b/lib/find_bit.c
index 32f99e9a670e..c9b6b9f96610 100644
--- a/lib/find_bit.c
+++ b/lib/find_bit.c
@@ -116,6 +116,91 @@ unsigned long _find_first_and_bit(const unsigned long *addr1,
 EXPORT_SYMBOL(_find_first_and_bit);
 #endif
 
+unsigned long _find_and_set_bit(volatile unsigned long *addr, unsigned long nbits)
+{
+	unsigned long bit;
+
+	do {
+		bit = FIND_FIRST_BIT(~addr[idx], /* nop */, nbits);
+		if (bit >= nbits)
+			return nbits;
+	} while (test_and_set_bit(bit, addr));
+
+	return bit;
+}
+EXPORT_SYMBOL(_find_and_set_bit);
+
+unsigned long _find_and_set_next_bit(volatile unsigned long *addr,
+				     unsigned long nbits, unsigned long start)
+{
+	unsigned long bit;
+
+	do {
+		bit = FIND_NEXT_BIT(~addr[idx], /* nop */, nbits, start);
+		if (bit >= nbits)
+			return nbits;
+	} while (test_and_set_bit(bit, addr));
+
+	return bit;
+}
+EXPORT_SYMBOL(_find_and_set_next_bit);
+
+unsigned long _find_and_set_bit_lock(volatile unsigned long *addr, unsigned long nbits)
+{
+	unsigned long bit;
+
+	do {
+		bit = FIND_FIRST_BIT(~addr[idx], /* nop */, nbits);
+		if (bit >= nbits)
+			return nbits;
+	} while (test_and_set_bit_lock(bit, addr));
+
+	return bit;
+}
+EXPORT_SYMBOL(_find_and_set_bit_lock);
+
+unsigned long _find_and_set_next_bit_lock(volatile unsigned long *addr,
+					  unsigned long nbits, unsigned long start)
+{
+	unsigned long bit;
+
+	do {
+		bit = FIND_NEXT_BIT(~addr[idx], /* nop */, nbits, start);
+		if (bit >= nbits)
+			return nbits;
+	} while (test_and_set_bit_lock(bit, addr));
+
+	return bit;
+}
+EXPORT_SYMBOL(_find_and_set_next_bit_lock);
+
+unsigned long _find_and_clear_bit(volatile unsigned long *addr, unsigned long nbits)
+{
+	unsigned long bit;
+
+	do {
+		bit = FIND_FIRST_BIT(addr[idx], /* nop */, nbits);
+		if (bit >= nbits)
+			return nbits;
+	} while (!test_and_clear_bit(bit, addr));
+
+	return bit;
+}
+EXPORT_SYMBOL(_find_and_clear_bit);
+
+unsigned long _find_and_clear_next_bit(volatile unsigned long *addr,
+				       unsigned long nbits, unsigned long start)
+{
+	do {
+		start = FIND_NEXT_BIT(addr[idx], /* nop */, nbits, start);
+		if (start >= nbits)
+			return nbits;
+	} while (!test_and_clear_bit(start, addr));
+
+	return start;
+}
+EXPORT_SYMBOL(_find_and_clear_next_bit);
+
 #ifndef find_first_zero_bit
 /*
  * Find the first cleared bit in a memory region.

From patchwork Tue Dec 12 02:27:16 2023
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 13488374
From: Yury Norov
Subject: [PATCH v3 02/35] lib/find: add test for atomic find_bit() ops
Date: Mon, 11 Dec 2023 18:27:16 -0800
Message-Id: <20231212022749.625238-3-yury.norov@gmail.com>
In-Reply-To: <20231212022749.625238-1-yury.norov@gmail.com>

Add a basic functionality test for the new API.
Signed-off-by: Yury Norov
---
 lib/test_bitmap.c | 61 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c
index 65f22c2578b0..277e1ca9fd28 100644
--- a/lib/test_bitmap.c
+++ b/lib/test_bitmap.c
@@ -221,6 +221,65 @@ static void __init test_zero_clear(void)
 	expect_eq_pbl("", bmap, 1024);
 }
 
+static void __init test_find_and_bit(void)
+{
+	unsigned long w, w_part, bit, cnt = 0;
+	DECLARE_BITMAP(bmap, EXP1_IN_BITS);
+
+	/*
+	 * Test find_and_clear{_next}_bit() and corresponding
+	 * iterators
+	 */
+	bitmap_copy(bmap, exp1, EXP1_IN_BITS);
+	w = bitmap_weight(bmap, EXP1_IN_BITS);
+
+	for_each_test_and_clear_bit(bit, bmap, EXP1_IN_BITS)
+		cnt++;
+
+	expect_eq_uint(w, cnt);
+	expect_eq_uint(0, bitmap_weight(bmap, EXP1_IN_BITS));
+
+	bitmap_copy(bmap, exp1, EXP1_IN_BITS);
+	w = bitmap_weight(bmap, EXP1_IN_BITS);
+	w_part = bitmap_weight(bmap, EXP1_IN_BITS / 3);
+
+	cnt = 0;
+	bit = EXP1_IN_BITS / 3;
+	for_each_test_and_clear_bit_from(bit, bmap, EXP1_IN_BITS)
+		cnt++;
+
+	expect_eq_uint(bitmap_weight(bmap, EXP1_IN_BITS), bitmap_weight(bmap, EXP1_IN_BITS / 3));
+	expect_eq_uint(w_part, bitmap_weight(bmap, EXP1_IN_BITS));
+	expect_eq_uint(w - w_part, cnt);
+
+	/*
+	 * Test find_and_set{_next}_bit() and corresponding
+	 * iterators
+	 */
+	bitmap_copy(bmap, exp1, EXP1_IN_BITS);
+	w = bitmap_weight(bmap, EXP1_IN_BITS);
+	cnt = 0;
+
+	for_each_test_and_set_bit(bit, bmap, EXP1_IN_BITS)
+		cnt++;
+
+	expect_eq_uint(EXP1_IN_BITS - w, cnt);
+	expect_eq_uint(EXP1_IN_BITS, bitmap_weight(bmap, EXP1_IN_BITS));
+
+	bitmap_copy(bmap, exp1, EXP1_IN_BITS);
+	w = bitmap_weight(bmap, EXP1_IN_BITS);
+	w_part = bitmap_weight(bmap, EXP1_IN_BITS / 3);
+	cnt = 0;
+
+	bit = EXP1_IN_BITS / 3;
+	for_each_test_and_set_bit_from(bit, bmap, EXP1_IN_BITS)
+		cnt++;
+
+	expect_eq_uint(EXP1_IN_BITS - bitmap_weight(bmap, EXP1_IN_BITS),
+		       EXP1_IN_BITS / 3 - bitmap_weight(bmap, EXP1_IN_BITS / 3));
+	expect_eq_uint(EXP1_IN_BITS * 2 / 3 - (w - w_part), cnt);
+}
+
 static void __init test_find_nth_bit(void)
 {
 	unsigned long b, bit, cnt = 0;
@@ -1273,6 +1332,8 @@ static void __init selftest(void)
 	test_for_each_clear_bitrange_from();
 	test_for_each_set_clump8();
 	test_for_each_set_bit_wrap();
+
+	test_find_and_bit();
 }
 
 KSTM_MODULE_LOADERS(test_bitmap);

From patchwork Tue Dec 12 02:27:35 2023
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 13488375
X-Patchwork-Delegate: kuba@kernel.org
From: Yury Norov
Subject: [PATCH v3 21/35] sfc: optimize the driver by using atomic find_bit() API
Date: Mon, 11 Dec 2023 18:27:35 -0800
Message-Id: <20231212022749.625238-22-yury.norov@gmail.com>
In-Reply-To: <20231212022749.625238-1-yury.norov@gmail.com>

SFC code traverses rps_slot_map and rxq_retry_mask bit by bit. Simplify
it by using dedicated atomic find_bit() functions, as they skip already
clear bits.
Signed-off-by: Yury Norov
Reviewed-by: Edward Cree
---
 drivers/net/ethernet/sfc/rx_common.c         |  4 +---
 drivers/net/ethernet/sfc/siena/rx_common.c   |  4 +---
 drivers/net/ethernet/sfc/siena/siena_sriov.c | 14 ++++++--------
 3 files changed, 8 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/sfc/rx_common.c b/drivers/net/ethernet/sfc/rx_common.c
index d2f35ee15eff..0112968b3fe7 100644
--- a/drivers/net/ethernet/sfc/rx_common.c
+++ b/drivers/net/ethernet/sfc/rx_common.c
@@ -950,9 +950,7 @@ int efx_filter_rfs(struct net_device *net_dev, const struct sk_buff *skb,
 	int rc;
 
 	/* find a free slot */
-	for (slot_idx = 0; slot_idx < EFX_RPS_MAX_IN_FLIGHT; slot_idx++)
-		if (!test_and_set_bit(slot_idx, &efx->rps_slot_map))
-			break;
+	slot_idx = find_and_set_bit(&efx->rps_slot_map, EFX_RPS_MAX_IN_FLIGHT);
 	if (slot_idx >= EFX_RPS_MAX_IN_FLIGHT)
 		return -EBUSY;
 
diff --git a/drivers/net/ethernet/sfc/siena/rx_common.c b/drivers/net/ethernet/sfc/siena/rx_common.c
index 4579f43484c3..160b16aa7486 100644
--- a/drivers/net/ethernet/sfc/siena/rx_common.c
+++ b/drivers/net/ethernet/sfc/siena/rx_common.c
@@ -958,9 +958,7 @@ int efx_siena_filter_rfs(struct net_device *net_dev, const struct sk_buff *skb,
 	int rc;
 
 	/* find a free slot */
-	for (slot_idx = 0; slot_idx < EFX_RPS_MAX_IN_FLIGHT; slot_idx++)
-		if (!test_and_set_bit(slot_idx, &efx->rps_slot_map))
-			break;
+	slot_idx = find_and_set_bit(&efx->rps_slot_map, EFX_RPS_MAX_IN_FLIGHT);
 	if (slot_idx >= EFX_RPS_MAX_IN_FLIGHT)
 		return -EBUSY;
 
diff --git a/drivers/net/ethernet/sfc/siena/siena_sriov.c b/drivers/net/ethernet/sfc/siena/siena_sriov.c
index 8353c15dc233..554b799288b8 100644
--- a/drivers/net/ethernet/sfc/siena/siena_sriov.c
+++ b/drivers/net/ethernet/sfc/siena/siena_sriov.c
@@ -722,14 +722,12 @@ static int efx_vfdi_fini_all_queues(struct siena_vf *vf)
 			 efx_vfdi_flush_wake(vf),
 			 timeout);
 		rxqs_count = 0;
-		for (index = 0; index < count; ++index) {
-			if (test_and_clear_bit(index, vf->rxq_retry_mask)) {
-				atomic_dec(&vf->rxq_retry_count);
-				MCDI_SET_ARRAY_DWORD(
-					inbuf, FLUSH_RX_QUEUES_IN_QID_OFST,
-					rxqs_count, vf_offset + index);
-				rxqs_count++;
-			}
+		for_each_test_and_clear_bit(index, vf->rxq_retry_mask, count) {
+			atomic_dec(&vf->rxq_retry_count);
+			MCDI_SET_ARRAY_DWORD(
+				inbuf, FLUSH_RX_QUEUES_IN_QID_OFST,
+				rxqs_count, vf_offset + index);
+			rxqs_count++;
 		}
 	}

From patchwork Tue Dec 12 02:27:40 2023
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 13488376
From: Yury Norov
Subject: [PATCH v3 26/35] mISDN: optimize get_free_devid()
Date: Mon, 11 Dec 2023 18:27:40 -0800
Message-Id: <20231212022749.625238-27-yury.norov@gmail.com>
In-Reply-To: <20231212022749.625238-1-yury.norov@gmail.com>

get_free_devid() traverses each bit in device_ids in an open-coded loop.
Simplify it by using the dedicated find_and_set_bit(). This makes the
whole function a nice one-liner, and because MAX_DEVICE_ID is a small
compile-time constant (63), on 64-bit platforms the find_and_set_bit()
call will be optimized to: ffs(); test_and_set_bit().

Signed-off-by: Yury Norov
---
 drivers/isdn/mISDN/core.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/drivers/isdn/mISDN/core.c b/drivers/isdn/mISDN/core.c
index ab8513a7acd5..c829c4eac0e2 100644
--- a/drivers/isdn/mISDN/core.c
+++ b/drivers/isdn/mISDN/core.c
@@ -197,14 +197,9 @@ get_mdevice_count(void)
 static int
 get_free_devid(void)
 {
-	u_int	i;
+	int	i = find_and_set_bit((u_long *)&device_ids, MAX_DEVICE_ID + 1);
 
-	for (i = 0; i <= MAX_DEVICE_ID; i++)
-		if (!test_and_set_bit(i, (u_long *)&device_ids))
-			break;
-	if (i > MAX_DEVICE_ID)
-		return -EBUSY;
-	return i;
+	return i <= MAX_DEVICE_ID ? i : -EBUSY;
 }
 
 int

From patchwork Tue Dec 12 02:27:42 2023
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 13488377
X-Patchwork-Delegate: kuba@kernel.org
From: Yury Norov
Subject: [PATCH v3 28/35] ethernet: rocker: optimize ofdpa_port_internal_vlan_id_get()
Date: Mon, 11 Dec 2023 18:27:42 -0800
Message-Id: <20231212022749.625238-29-yury.norov@gmail.com>
In-Reply-To: <20231212022749.625238-1-yury.norov@gmail.com>

Optimize ofdpa_port_internal_vlan_id_get() by using find_and_set_bit(),
instead of polling every bit of the bitmap in a for-loop.

Signed-off-by: Yury Norov
---
 drivers/net/ethernet/rocker/rocker_ofdpa.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/rocker/rocker_ofdpa.c b/drivers/net/ethernet/rocker/rocker_ofdpa.c
index 826990459fa4..449be8af7ffc 100644
--- a/drivers/net/ethernet/rocker/rocker_ofdpa.c
+++ b/drivers/net/ethernet/rocker/rocker_ofdpa.c
@@ -2249,14 +2249,11 @@ static __be16 ofdpa_port_internal_vlan_id_get(struct ofdpa_port *ofdpa_port,
 	found = entry;
 	hash_add(ofdpa->internal_vlan_tbl, &found->entry, found->ifindex);
 
-	for (i = 0; i < OFDPA_N_INTERNAL_VLANS; i++) {
-		if (test_and_set_bit(i, ofdpa->internal_vlan_bitmap))
-			continue;
+	i = find_and_set_bit(ofdpa->internal_vlan_bitmap, OFDPA_N_INTERNAL_VLANS);
+	if (i < OFDPA_N_INTERNAL_VLANS)
 		found->vlan_id = htons(OFDPA_INTERNAL_VLAN_ID_BASE + i);
-		goto found;
-	}
-
-	netdev_err(ofdpa_port->dev, "Out of internal VLAN IDs\n");
+	else
+		netdev_err(ofdpa_port->dev, "Out of internal VLAN IDs\n");
 
 found:
 	found->ref_count++;

From patchwork Tue Dec 12 02:27:44 2023
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 13488378
X-Patchwork-Delegate: kuba@kernel.org
From: Yury Norov
Subject: [PATCH v3 30/35] bluetooth: optimize cmtp_alloc_block_id()
Date: Mon, 11 Dec 2023 18:27:44 -0800
Message-Id: <20231212022749.625238-31-yury.norov@gmail.com>
In-Reply-To: <20231212022749.625238-1-yury.norov@gmail.com>

Instead of polling every bit in blockids, use a dedicated
find_and_set_bit(), and make the function a simple one-liner.

Signed-off-by: Yury Norov
---
 net/bluetooth/cmtp/core.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/net/bluetooth/cmtp/core.c b/net/bluetooth/cmtp/core.c
index 90d130588a3e..b1330acbbff3 100644
--- a/net/bluetooth/cmtp/core.c
+++ b/net/bluetooth/cmtp/core.c
@@ -88,15 +88,9 @@ static void __cmtp_copy_session(struct cmtp_session *session, struct cmtp_connin
 
 static inline int cmtp_alloc_block_id(struct cmtp_session *session)
 {
-	int i, id = -1;
+	int id = find_and_set_bit(&session->blockids, 16);
 
-	for (i = 0; i < 16; i++)
-		if (!test_and_set_bit(i, &session->blockids)) {
-			id = i;
-			break;
-		}
-
-	return id;
+	return id < 16 ? id : -1;
 }
 
 static inline void cmtp_free_block_id(struct cmtp_session *session, int id)

From patchwork Tue Dec 12 02:27:45 2023
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 13488379
X-Patchwork-Delegate: kuba@kernel.org
From: Yury Norov
Subject: [PATCH v3 31/35] net: smc: optimize smc_wr_tx_get_free_slot_index()
Date: Mon, 11 Dec 2023 18:27:45 -0800
Message-Id: <20231212022749.625238-32-yury.norov@gmail.com>
In-Reply-To: <20231212022749.625238-1-yury.norov@gmail.com>

Simplify the function by using find_and_set_bit() and make it almost a
one-liner.

While here, drop the explicit initialization of *idx, because it's
already initialized by the caller in case of ENOLINK, or set properly
with ->wr_tx_mask if nothing is found, in case of EBUSY.

CC: Tony Lu
Signed-off-by: Yury Norov
Reviewed-by: Alexandra Winter
Reviewed-by: Wen Gu
---
 net/smc/smc_wr.c | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/net/smc/smc_wr.c b/net/smc/smc_wr.c
index 0021065a600a..b6f0cfc52788 100644
--- a/net/smc/smc_wr.c
+++ b/net/smc/smc_wr.c
@@ -170,15 +170,11 @@ void smc_wr_tx_cq_handler(struct ib_cq *ib_cq, void *cq_context)
 
 static inline int smc_wr_tx_get_free_slot_index(struct smc_link *link, u32 *idx)
 {
-	*idx = link->wr_tx_cnt;
 	if (!smc_link_sendable(link))
 		return -ENOLINK;
-	for_each_clear_bit(*idx, link->wr_tx_mask, link->wr_tx_cnt) {
-		if (!test_and_set_bit(*idx, link->wr_tx_mask))
-			return 0;
-	}
-	*idx = link->wr_tx_cnt;
-	return -EBUSY;
+
+	*idx = find_and_set_bit(link->wr_tx_mask, link->wr_tx_cnt);
+	return *idx < link->wr_tx_cnt ? 0 : -EBUSY;
 }
 
 /**