From patchwork Thu Oct 26 13:59:11 2023
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 13437601
Date: Thu, 26 Oct 2023 15:59:11 +0200
Message-ID: <20231026135912.1214302-1-glider@google.com>
Subject: [PATCH v10 1/2] lib/bitmap: add bitmap_{read,write}()
From: Alexander Potapenko
To: glider@google.com, catalin.marinas@arm.com, will@kernel.org,
    pcc@google.com, andreyknvl@gmail.com, andriy.shevchenko@linux.intel.com,
    aleksander.lobakin@intel.com, linux@rasmusvillemoes.dk,
    yury.norov@gmail.com, alexandru.elisei@arm.com
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    eugenis@google.com, syednwaris@gmail.com, william.gray@linaro.org,
    Arnd Bergmann

From: Syed Nayyar Waris

The two new functions allow reading/writing values of length up to
BITS_PER_LONG bits at an arbitrary position in the bitmap.

The code was taken from "bitops: Introduce the for_each_set_clump macro"
by Syed Nayyar Waris with a number of changes and simplifications:
 - instead of using roundup(), which adds an unnecessary dependency on
   <linux/math.h>, we calculate space as BITS_PER_LONG-offset;
 - indentation is reduced by not using else-clauses (suggested by
   checkpatch for bitmap_get_value());
 - bitmap_get_value()/bitmap_set_value() are renamed to bitmap_read()
   and bitmap_write();
 - some redundant computations are omitted.
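A minimal usage sketch (illustration only, not part of the diff below;
bitmap_rw_example() is a hypothetical caller, and only the two helpers
added by this patch plus the existing bitmap API are assumed):

  static void bitmap_rw_example(void)
  {
          DECLARE_BITMAP(map, 64);
          unsigned long v;

          bitmap_zero(map, 64);
          /* Pack three 5-bit fields at bit offsets 0, 5 and 10. */
          bitmap_write(map, 0x1f, 0, 5);
          bitmap_write(map, 0x05, 5, 5);
          bitmap_write(map, 0x0a, 10, 5);

          /* Read the middle field back. */
          v = bitmap_read(map, 5, 5);
          WARN_ON(v != 0x05);
  }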
Cc: Arnd Bergmann
Signed-off-by: Syed Nayyar Waris
Signed-off-by: William Breathitt Gray
Link: https://lore.kernel.org/lkml/fe12eedf3666f4af5138de0e70b67a07c7f40338.1592224129.git.syednwaris@gmail.com/
Suggested-by: Yury Norov
Co-developed-by: Alexander Potapenko
Signed-off-by: Alexander Potapenko
Reviewed-by: Andy Shevchenko
---
This patch was previously part of the "Implement MTE tag compression for
swapped pages" series
(https://lore.kernel.org/linux-arm-kernel/20231011172836.2579017-4-glider@google.com/T/)

This patch was previously called "lib/bitmap: add bitmap_{set,get}_value()"
(https://lore.kernel.org/lkml/20230720173956.3674987-2-glider@google.com/)

v10:
 - update comments as requested by Andy Shevchenko

v8:
 - as suggested by Andy Shevchenko, handle reads/writes of more than
   BITS_PER_LONG bits, add a note for 32-bit systems

v7:
 - Address comments by Yury Norov, Andy Shevchenko, Rasmus Villemoes:
   - update code comments;
   - get rid of GENMASK();
   - s/assign_bit/__assign_bit;
   - more vertical whitespace for better readability;
   - more compact code for bitmap_write() (now for real)

v6:
 - As suggested by Yury Norov, do not require bitmap_read(..., 0) to
   return 0.

v5:
 - Address comments by Yury Norov:
   - updated code comments and patch title/description
   - replace GENMASK(nbits - 1, 0) with BITMAP_LAST_WORD_MASK(nbits)
   - more compact bitmap_write() implementation

v4:
 - Address comments by Andy Shevchenko and Yury Norov:
   - prevent passing values >= 64 to GENMASK()
   - fix commit authorship
   - change comments
   - check for unlikely(nbits==0)
   - drop unnecessary const declarations
   - fix kernel-doc comments
   - rename bitmap_{get,set}_value() to bitmap_{read,write}()
---
 include/linux/bitmap.h | 78 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
index 03644237e1efb..f5745b505a194 100644
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -77,6 +77,10 @@ struct device;
  *  bitmap_to_arr64(buf, src, nbits)            Copy nbits from buf to u64[] dst
  *  bitmap_get_value8(map, start)               Get 8bit value from map at start
  *  bitmap_set_value8(map, value, start)        Set 8bit value to map at start
+ *  bitmap_read(map, start, nbits)              Read an nbits-sized value from
+ *                                              map at start
+ *  bitmap_write(map, value, start, nbits)      Write an nbits-sized value to
+ *                                              map at start
  *
  * Note, bitmap_zero() and bitmap_fill() operate over the region of
  * unsigned longs, that is, bits behind bitmap till the unsigned long
@@ -599,6 +603,80 @@ static inline void bitmap_set_value8(unsigned long *map, unsigned long value,
 	map[index] |= value << offset;
 }
 
+/**
+ * bitmap_read - read a value of n-bits from the memory region
+ * @map: address to the bitmap memory region
+ * @start: bit offset of the n-bit value
+ * @nbits: size of value in bits, nonzero, up to BITS_PER_LONG
+ *
+ * Returns: value of @nbits bits located at the @start bit offset within the
+ * @map memory region. For @nbits = 0 and @nbits > BITS_PER_LONG the return
+ * value is undefined.
+ */
+static inline unsigned long bitmap_read(const unsigned long *map,
+					unsigned long start,
+					unsigned long nbits)
+{
+	size_t index = BIT_WORD(start);
+	unsigned long offset = start % BITS_PER_LONG;
+	unsigned long space = BITS_PER_LONG - offset;
+	unsigned long value_low, value_high;
+
+	if (unlikely(!nbits || nbits > BITS_PER_LONG))
+		return 0;
+
+	if (space >= nbits)
+		return (map[index] >> offset) & BITMAP_LAST_WORD_MASK(nbits);
+
+	value_low = map[index] & BITMAP_FIRST_WORD_MASK(start);
+	value_high = map[index + 1] & BITMAP_LAST_WORD_MASK(start + nbits);
+	return (value_low >> offset) | (value_high << space);
+}
+
+/**
+ * bitmap_write - write n-bit value within a memory region
+ * @map: address to the bitmap memory region
+ * @value: value to write, clamped to nbits
+ * @start: bit offset of the n-bit value
+ * @nbits: size of value in bits, nonzero, up to BITS_PER_LONG.
+ *
+ * bitmap_write() behaves as-if implemented as @nbits calls of __assign_bit(),
+ * i.e. bits beyond @nbits are ignored:
+ *
+ *   for (bit = 0; bit < nbits; bit++)
+ *           __assign_bit(start + bit, bitmap, val & BIT(bit));
+ *
+ * For @nbits > BITS_PER_LONG no writes are performed.
+ */
+static inline void bitmap_write(unsigned long *map,
+				unsigned long value,
+				unsigned long start, unsigned long nbits)
+{
+	size_t index;
+	unsigned long offset;
+	unsigned long space;
+	unsigned long mask;
+	bool fit;
+
+	if (unlikely(!nbits || nbits > BITS_PER_LONG))
+		return;
+
+	mask = BITMAP_LAST_WORD_MASK(nbits);
+	value &= mask;
+	offset = start % BITS_PER_LONG;
+	space = BITS_PER_LONG - offset;
+	fit = space >= nbits;
+	index = BIT_WORD(start);
+
+	map[index] &= (fit ? (~(mask << offset)) : ~BITMAP_FIRST_WORD_MASK(start));
+	map[index] |= value << offset;
+	if (fit)
+		return;
+
+	map[index + 1] &= BITMAP_FIRST_WORD_MASK(start + nbits);
+	map[index + 1] |= (value >> space);
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* __LINUX_BITMAP_H */

From patchwork Thu Oct 26 13:59:12 2023
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 13437602
Date: Thu, 26 Oct 2023 15:59:12 +0200
In-Reply-To: <20231026135912.1214302-1-glider@google.com>
References: <20231026135912.1214302-1-glider@google.com>
Message-ID: <20231026135912.1214302-2-glider@google.com>
Subject: [PATCH v10 2/2] lib/test_bitmap: add tests for bitmap_{read,write}()
From: Alexander Potapenko
To: glider@google.com, catalin.marinas@arm.com, will@kernel.org,
    pcc@google.com, andreyknvl@gmail.com, andriy.shevchenko@linux.intel.com,
    aleksander.lobakin@intel.com, linux@rasmusvillemoes.dk,
    yury.norov@gmail.com, alexandru.elisei@arm.com
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    eugenis@google.com, syednwaris@gmail.com, william.gray@linaro.org

Add basic tests ensuring that values can be added at arbitrary positions
of the bitmap, including those spanning into the adjacent unsigned longs.

Two new performance tests, test_bitmap_read_perf() and
test_bitmap_write_perf(), can be used to assess future performance
improvements of bitmap_read() and bitmap_write():

[    0.431119][    T1] test_bitmap: Time spent in test_bitmap_read_perf:	615253
[    0.433197][    T1] test_bitmap: Time spent in test_bitmap_write_perf:	916313

(numbers from an Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz machine running
QEMU).
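The correctness checks compare bitmap_write() against a per-bit reference
model built with __assign_bit(); roughly (condensed illustration only, not
part of the diff below; check_write_against_model() is a hypothetical name,
and 1000 matches the TEST_BIT_LEN constant introduced by this patch):

  static void check_write_against_model(unsigned long *map, unsigned long val,
                                        unsigned long start, unsigned long nbits)
  {
          DECLARE_BITMAP(exp, 1000);
          unsigned long bit;

          bitmap_copy(exp, map, 1000);
          /* Expected result: nbits individual bit assignments. */
          for (bit = 0; bit < nbits; bit++)
                  __assign_bit(start + bit, exp, val & BIT(bit));

          bitmap_write(map, val, start, nbits);
          /* map must now be identical to exp. */
  }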
Signed-off-by: Alexander Potapenko
Reviewed-by: Andy Shevchenko
---
This patch was previously part of the "Implement MTE tag compression for
swapped pages" series
(https://lore.kernel.org/linux-arm-kernel/20231011172836.2579017-4-glider@google.com/T/)

This patch was previously called
"lib/test_bitmap: add tests for bitmap_{set,get}_value()"
(https://lore.kernel.org/lkml/20230720173956.3674987-3-glider@google.com/)
and "lib/test_bitmap: add tests for bitmap_{set,get}_value_unaligned"
(https://lore.kernel.org/lkml/20230713125706.2884502-3-glider@google.com/)

v9:
 - use WRITE_ONCE() to prevent optimizations in test_bitmap_read_perf()
 - update patch description

v8:
 - as requested by Andy Shevchenko, add tests for reading/writing
   sizes > BITS_PER_LONG

v7:
 - as requested by Yury Norov, add performance tests for bitmap_read()
   and bitmap_write()

v6:
 - use bitmap API to initialize test bitmaps
 - as requested by Yury Norov, do not check the return value of
   bitmap_read(..., 0)
 - fix a compiler warning on 32-bit systems

v5:
 - update patch title
 - address Yury Norov's comments:
   - rename the test cases
   - factor out test_bitmap_write_helper() to test writing over
     different background patterns;
   - add a test case copying a nontrivial value bit-by-bit;
   - drop volatile

v4:
 - Address comments by Andy Shevchenko:
   - added Reviewed-by: and a link to the previous discussion
 - Address comments by Yury Norov:
   - expand the bitmap to catch more corner cases
   - add code testing that bitmap_set_value() does not touch adjacent bits
   - add code testing the nbits==0 case
   - rename bitmap_{get,set}_value() to bitmap_{read,write}()

v3:
 - switch to using bitmap_{set,get}_value()
 - change the expected bit pattern in test_set_get_value(),
   as the test was incorrectly assuming 0 is the LSB.
---
 lib/test_bitmap.c | 177 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 177 insertions(+)

diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c
index f2ea9f30c7c5d..a4195c7376840 100644
--- a/lib/test_bitmap.c
+++ b/lib/test_bitmap.c
@@ -71,6 +71,17 @@ __check_eq_uint(const char *srcfile, unsigned int line,
 	return true;
 }
 
+static bool __init
+__check_eq_ulong(const char *srcfile, unsigned int line,
+		 const unsigned long exp_ulong, unsigned long x)
+{
+	if (exp_ulong != x) {
+		pr_err("[%s:%u] expected %lu, got %lu\n",
+			srcfile, line, exp_ulong, x);
+		return false;
+	}
+	return true;
+}
 
 static bool __init
 __check_eq_bitmap(const char *srcfile, unsigned int line,
@@ -186,6 +197,7 @@ __check_eq_str(const char *srcfile, unsigned int line,
 })
 
 #define expect_eq_uint(...)		__expect_eq(uint, ##__VA_ARGS__)
+#define expect_eq_ulong(...)		__expect_eq(ulong, ##__VA_ARGS__)
 #define expect_eq_bitmap(...)		__expect_eq(bitmap, ##__VA_ARGS__)
 #define expect_eq_pbl(...)		__expect_eq(pbl, ##__VA_ARGS__)
 #define expect_eq_u32_array(...)	__expect_eq(u32_array, ##__VA_ARGS__)
@@ -1222,6 +1234,168 @@ static void __init test_bitmap_const_eval(void)
 	BUILD_BUG_ON(~var != ~BIT(25));
 }
 
+/*
+ * Test bitmap should be big enough to include the cases when start is not in
+ * the first word, and start+nbits lands in the following word.
+ */
+#define TEST_BIT_LEN (1000)
+
+/*
+ * Helper function to test bitmap_write() overwriting the chosen byte pattern.
+ */
+static void __init test_bitmap_write_helper(const char *pattern)
+{
+	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
+	DECLARE_BITMAP(exp_bitmap, TEST_BIT_LEN);
+	DECLARE_BITMAP(pat_bitmap, TEST_BIT_LEN);
+	unsigned long w, r, bit;
+	int i, n, nbits;
+
+	/*
+	 * Only parse the pattern once and store the result in the intermediate
+	 * bitmap.
+	 */
+	bitmap_parselist(pattern, pat_bitmap, TEST_BIT_LEN);
+
+	/*
+	 * Check that writing a single bit does not accidentally touch the
+	 * adjacent bits.
+	 */
+	for (i = 0; i < TEST_BIT_LEN; i++) {
+		bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN);
+		bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN);
+		for (bit = 0; bit <= 1; bit++) {
+			bitmap_write(bitmap, bit, i, 1);
+			__assign_bit(i, exp_bitmap, bit);
+			expect_eq_bitmap(exp_bitmap, bitmap,
+					 TEST_BIT_LEN);
+		}
+	}
+
+	/* Ensure writing 0 bits does not change anything. */
+	bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN);
+	bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN);
+	for (i = 0; i < TEST_BIT_LEN; i++) {
+		bitmap_write(bitmap, ~0UL, i, 0);
+		expect_eq_bitmap(exp_bitmap, bitmap, TEST_BIT_LEN);
+	}
+
+	for (nbits = BITS_PER_LONG; nbits >= 1; nbits--) {
+		w = IS_ENABLED(CONFIG_64BIT) ? 0xdeadbeefdeadbeefUL
+					     : 0xdeadbeefUL;
+		w >>= (BITS_PER_LONG - nbits);
+		for (i = 0; i <= TEST_BIT_LEN - nbits; i++) {
+			bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN);
+			bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN);
+			for (n = 0; n < nbits; n++)
+				__assign_bit(i + n, exp_bitmap, w & BIT(n));
+			bitmap_write(bitmap, w, i, nbits);
+			expect_eq_bitmap(exp_bitmap, bitmap, TEST_BIT_LEN);
+			r = bitmap_read(bitmap, i, nbits);
+			expect_eq_ulong(r, w);
+		}
+	}
+}
+
+static void __init test_bitmap_read_write(void)
+{
+	unsigned char *pattern[3] = {"", "all:1/2", "all"};
+	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
+	unsigned long zero_bits = 0, bits_per_long = BITS_PER_LONG;
+	unsigned long val;
+	int i, pi;
+
+	/*
+	 * Reading/writing zero bits should not crash the kernel.
+	 * READ_ONCE() prevents constant folding.
+	 */
+	bitmap_write(NULL, 0, 0, READ_ONCE(zero_bits));
+	/* Return value of bitmap_read() is undefined here. */
+	bitmap_read(NULL, 0, READ_ONCE(zero_bits));
+
+	/*
+	 * Reading/writing more than BITS_PER_LONG bits should not crash the
+	 * kernel. READ_ONCE() prevents constant folding.
+	 */
+	bitmap_write(NULL, 0, 0, READ_ONCE(bits_per_long) + 1);
+	/* Return value of bitmap_read() is undefined here. */
+	bitmap_read(NULL, 0, READ_ONCE(bits_per_long) + 1);
+
+	/*
+	 * Ensure that bitmap_read() reads the same value that was previously
+	 * written, and two consecutive values are correctly merged.
+	 * The resulting bit pattern is asymmetric to rule out possible issues
+	 * with bit numeration order.
+	 */
+	for (i = 0; i < TEST_BIT_LEN - 7; i++) {
+		bitmap_zero(bitmap, TEST_BIT_LEN);
+
+		bitmap_write(bitmap, 0b10101UL, i, 5);
+		val = bitmap_read(bitmap, i, 5);
+		expect_eq_ulong(0b10101UL, val);
+
+		bitmap_write(bitmap, 0b101UL, i + 5, 3);
+		val = bitmap_read(bitmap, i + 5, 3);
+		expect_eq_ulong(0b101UL, val);
+
+		val = bitmap_read(bitmap, i, 8);
+		expect_eq_ulong(0b10110101UL, val);
+	}
+
+	for (pi = 0; pi < ARRAY_SIZE(pattern); pi++)
+		test_bitmap_write_helper(pattern[pi]);
+}
+
+static void __init test_bitmap_read_perf(void)
+{
+	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
+	unsigned int cnt, nbits, i;
+	unsigned long val;
+	ktime_t time;
+
+	bitmap_fill(bitmap, TEST_BIT_LEN);
+	time = ktime_get();
+	for (cnt = 0; cnt < 5; cnt++) {
+		for (nbits = 1; nbits <= BITS_PER_LONG; nbits++) {
+			for (i = 0; i < TEST_BIT_LEN; i++) {
+				if (i + nbits > TEST_BIT_LEN)
+					break;
+				/*
+				 * Prevent the compiler from optimizing away the
+				 * bitmap_read() by using its value.
+				 */
+				WRITE_ONCE(val, bitmap_read(bitmap, i, nbits));
+			}
+		}
+	}
+	time = ktime_get() - time;
+	pr_err("Time spent in %s:\t%llu\n", __func__, time);
+}
+
+static void __init test_bitmap_write_perf(void)
+{
+	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
+	unsigned int cnt, nbits, i;
+	unsigned long val = 0xfeedface;
+	ktime_t time;
+
+	bitmap_zero(bitmap, TEST_BIT_LEN);
+	time = ktime_get();
+	for (cnt = 0; cnt < 5; cnt++) {
+		for (nbits = 1; nbits <= BITS_PER_LONG; nbits++) {
+			for (i = 0; i < TEST_BIT_LEN; i++) {
+				if (i + nbits > TEST_BIT_LEN)
+					break;
+				bitmap_write(bitmap, val, i, nbits);
+			}
+		}
+	}
+	time = ktime_get() - time;
+	pr_err("Time spent in %s:\t%llu\n", __func__, time);
+}
+
+#undef TEST_BIT_LEN
+
 static void __init selftest(void)
 {
 	test_zero_clear();
@@ -1237,6 +1411,9 @@ static void __init selftest(void)
 	test_bitmap_cut();
 	test_bitmap_print_buf();
 	test_bitmap_const_eval();
+	test_bitmap_read_write();
+	test_bitmap_read_perf();
+	test_bitmap_write_perf();
 	test_find_nth_bit();
 	test_for_each_set_bit();