From patchwork Mon Oct 30 15:32:09 2023
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 13440685
Date: Mon, 30 Oct 2023 16:32:09 +0100
Message-ID: <20231030153210.139512-1-glider@google.com>
Subject: [PATCH v11 1/2] lib/bitmap: add bitmap_{read,write}()
From: Alexander Potapenko
To: glider@google.com, catalin.marinas@arm.com, will@kernel.org,
    pcc@google.com, andreyknvl@gmail.com, andriy.shevchenko@linux.intel.com,
    aleksander.lobakin@intel.com, linux@rasmusvillemoes.dk,
    yury.norov@gmail.com, alexandru.elisei@arm.com
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    eugenis@google.com, syednwaris@gmail.com, william.gray@linaro.org,
    Arnd Bergmann

From: Syed Nayyar Waris

The two new functions allow reading/writing values of length up to
BITS_PER_LONG bits at arbitrary positions in the bitmap.

The code was taken from "bitops: Introduce the for_each_set_clump macro"
by Syed Nayyar Waris with a number of changes and simplifications:
 - instead of using roundup(), which adds an unnecessary dependency on
   <linux/math.h>, we calculate space as BITS_PER_LONG-offset;
 - indentation is reduced by not using else-clauses (suggested by
   checkpatch for bitmap_get_value());
 - bitmap_get_value()/bitmap_set_value() are renamed to bitmap_read()
   and bitmap_write();
 - some redundant computations are omitted.

Cc: Arnd Bergmann
Signed-off-by: Syed Nayyar Waris
Signed-off-by: William Breathitt Gray
Link: https://lore.kernel.org/lkml/fe12eedf3666f4af5138de0e70b67a07c7f40338.1592224129.git.syednwaris@gmail.com/
Suggested-by: Yury Norov
Co-developed-by: Alexander Potapenko
Signed-off-by: Alexander Potapenko
Reviewed-by: Andy Shevchenko
---
This patch was previously part of the "Implement MTE tag compression for
swapped pages" series
(https://lore.kernel.org/linux-arm-kernel/20231011172836.2579017-4-glider@google.com/T/)

This patch was previously called "lib/bitmap: add bitmap_{set,get}_value()"
(https://lore.kernel.org/lkml/20230720173956.3674987-2-glider@google.com/)

v11:
 - rearrange whitespace as requested by Andy Shevchenko, add Reviewed-by:,
   update a comment

v10:
 - update comments as requested by Andy Shevchenko

v8:
 - as suggested by Andy Shevchenko, handle reads/writes of more than
   BITS_PER_LONG bits, add a note for 32-bit systems

v7:
 - Address comments by Yury Norov, Andy Shevchenko, Rasmus Villemoes:
   - update code comments;
   - get rid of GENMASK();
   - s/assign_bit/__assign_bit;
   - more vertical whitespace for better readability;
   - more compact code for bitmap_write() (now for real)

v6:
 - As suggested by Yury Norov, do not require bitmap_read(..., 0) to
   return 0.
v5:
 - Address comments by Yury Norov:
   - updated code comments and patch title/description
   - replace GENMASK(nbits - 1, 0) with BITMAP_LAST_WORD_MASK(nbits)
   - more compact bitmap_write() implementation

v4:
 - Address comments by Andy Shevchenko and Yury Norov:
   - prevent passing values >= 64 to GENMASK()
   - fix commit authorship
   - change comments
   - check for unlikely(nbits==0)
   - drop unnecessary const declarations
   - fix kernel-doc comments
   - rename bitmap_{get,set}_value() to bitmap_{read,write}()
---
 include/linux/bitmap.h | 77 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 77 insertions(+)

diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
index 03644237e1efb..7dd00e2e6d539 100644
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -77,6 +77,10 @@ struct device;
  *  bitmap_to_arr64(buf, src, nbits)            Copy nbits from buf to u64[] dst
  *  bitmap_get_value8(map, start)               Get 8bit value from map at start
  *  bitmap_set_value8(map, value, start)        Set 8bit value to map at start
+ *  bitmap_read(map, start, nbits)              Read an nbits-sized value from
+ *                                              map at start
+ *  bitmap_write(map, value, start, nbits)      Write an nbits-sized value to
+ *                                              map at start
  *
  * Note, bitmap_zero() and bitmap_fill() operate over the region of
  * unsigned longs, that is, bits behind bitmap till the unsigned long
@@ -599,6 +603,79 @@ static inline void bitmap_set_value8(unsigned long *map, unsigned long value,
 	map[index] |= value << offset;
 }
 
+/**
+ * bitmap_read - read a value of n-bits from the memory region
+ * @map: address to the bitmap memory region
+ * @start: bit offset of the n-bit value
+ * @nbits: size of value in bits, nonzero, up to BITS_PER_LONG
+ *
+ * Returns: value of @nbits bits located at the @start bit offset within the
+ * @map memory region. For @nbits = 0 and @nbits > BITS_PER_LONG the return
+ * value is undefined.
+ */
+static inline unsigned long bitmap_read(const unsigned long *map,
+					unsigned long start,
+					unsigned long nbits)
+{
+	size_t index = BIT_WORD(start);
+	unsigned long offset = start % BITS_PER_LONG;
+	unsigned long space = BITS_PER_LONG - offset;
+	unsigned long value_low, value_high;
+
+	if (unlikely(!nbits || nbits > BITS_PER_LONG))
+		return 0;
+
+	if (space >= nbits)
+		return (map[index] >> offset) & BITMAP_LAST_WORD_MASK(nbits);
+
+	value_low = map[index] & BITMAP_FIRST_WORD_MASK(start);
+	value_high = map[index + 1] & BITMAP_LAST_WORD_MASK(start + nbits);
+	return (value_low >> offset) | (value_high << space);
+}
+
+/**
+ * bitmap_write - write n-bit value within a memory region
+ * @map: address to the bitmap memory region
+ * @value: value to write, clamped to nbits
+ * @start: bit offset of the n-bit value
+ * @nbits: size of value in bits, nonzero, up to BITS_PER_LONG.
+ *
+ * bitmap_write() behaves as-if implemented as @nbits calls of __assign_bit(),
+ * i.e. bits beyond @nbits are ignored:
+ *
+ *   for (bit = 0; bit < nbits; bit++)
+ *           __assign_bit(start + bit, bitmap, val & BIT(bit));
+ *
+ * For @nbits == 0 and @nbits > BITS_PER_LONG no writes are performed.
+ */
+static inline void bitmap_write(unsigned long *map, unsigned long value,
+				unsigned long start, unsigned long nbits)
+{
+	size_t index;
+	unsigned long offset;
+	unsigned long space;
+	unsigned long mask;
+	bool fit;
+
+	if (unlikely(!nbits || nbits > BITS_PER_LONG))
+		return;
+
+	mask = BITMAP_LAST_WORD_MASK(nbits);
+	value &= mask;
+	offset = start % BITS_PER_LONG;
+	space = BITS_PER_LONG - offset;
+	fit = space >= nbits;
+	index = BIT_WORD(start);
+
+	map[index] &= (fit ? (~(mask << offset)) : ~BITMAP_FIRST_WORD_MASK(start));
+	map[index] |= value << offset;
+	if (fit)
+		return;
+
+	map[index + 1] &= BITMAP_FIRST_WORD_MASK(start + nbits);
+	map[index + 1] |= (value >> space);
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* __LINUX_BITMAP_H */
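A minimal usage sketch of the new helpers (illustrative only, not part of the
patch; the bitmap size, offsets, and values are arbitrary): two small fields
are packed into one bitmap with bitmap_write() and read back, individually or
combined, with bitmap_read().

	#include <linux/bitmap.h>

	/* Illustrative sketch: pack a 5-bit and a 3-bit field into a bitmap. */
	static void bitmap_read_write_example(void)
	{
		DECLARE_BITMAP(map, 128);
		unsigned long val;

		bitmap_zero(map, 128);

		bitmap_write(map, 0x15, 60, 5);	/* 0b10101 at bits 60..64 */
		bitmap_write(map, 0x5, 65, 3);	/* 0b101 at bits 65..67 */

		val = bitmap_read(map, 60, 8);	/* val == 0xb5 (0b10110101) */
	}

Because @start addresses the least significant bit of the stored value, the
two writes merge into 0b10110101 when read back as a single 8-bit field, even
though the range straddles an unsigned long boundary on 64-bit kernels.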
From patchwork Mon Oct 30 15:32:10 2023
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 13440686
Date: Mon, 30 Oct 2023 16:32:10 +0100
In-Reply-To: <20231030153210.139512-1-glider@google.com>
References: <20231030153210.139512-1-glider@google.com>
Message-ID: <20231030153210.139512-2-glider@google.com>
Subject: [PATCH v11 2/2] lib/test_bitmap: add tests for bitmap_{read,write}()
From: Alexander Potapenko
To: glider@google.com, catalin.marinas@arm.com, will@kernel.org,
    pcc@google.com, andreyknvl@gmail.com, andriy.shevchenko@linux.intel.com,
    aleksander.lobakin@intel.com, linux@rasmusvillemoes.dk,
    yury.norov@gmail.com, alexandru.elisei@arm.com
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    eugenis@google.com, syednwaris@gmail.com, william.gray@linaro.org

Add basic tests ensuring that values can be added at arbitrary positions of
the bitmap, including those spanning into the adjacent unsigned longs.

Two new performance tests, test_bitmap_read_perf() and
test_bitmap_write_perf(), can be used to assess future performance
improvements of bitmap_read() and bitmap_write():

[    0.431119][    T1] test_bitmap: Time spent in test_bitmap_read_perf:	615253
[    0.433197][    T1] test_bitmap: Time spent in test_bitmap_write_perf:	916313

(numbers from an Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz machine running
QEMU).
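To make the word-straddling case concrete, here is a condensed sketch
(illustrative only, not taken from the patch; the offsets and the value are
arbitrary) of the property that test_bitmap_write_helper() below verifies: a
bitmap_write() crossing the boundary between two unsigned longs must leave the
bitmap in the same state as assigning the bits one at a time with
__assign_bit().

	#include <linux/bitmap.h>
	#include <linux/bitops.h>
	#include <linux/bug.h>

	static void check_straddling_write(void)
	{
		DECLARE_BITMAP(map, 128);
		DECLARE_BITMAP(exp, 128);
		unsigned long val = 0xdead;	/* arbitrary 16-bit value */
		unsigned int bit;

		bitmap_zero(map, 128);
		bitmap_zero(exp, 128);

		/* Bits 56..71 span map[0] and map[1] on a 64-bit kernel. */
		bitmap_write(map, val, 56, 16);

		for (bit = 0; bit < 16; bit++)
			__assign_bit(56 + bit, exp, val & BIT(bit));

		WARN_ON(!bitmap_equal(map, exp, 128));
	}

The in-tree helper performs the same comparison for every offset, every width
from 1 to BITS_PER_LONG bits, and several background patterns.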
Signed-off-by: Alexander Potapenko
Reviewed-by: Andy Shevchenko
---
This patch was previously part of the "Implement MTE tag compression for
swapped pages" series
(https://lore.kernel.org/linux-arm-kernel/20231011172836.2579017-4-glider@google.com/T/)

This patch was previously called "lib/test_bitmap: add tests for
bitmap_{set,get}_value()"
(https://lore.kernel.org/lkml/20230720173956.3674987-3-glider@google.com/)
and "lib/test_bitmap: add tests for bitmap_{set,get}_value_unaligned"
(https://lore.kernel.org/lkml/20230713125706.2884502-3-glider@google.com/)

v9:
 - use WRITE_ONCE() to prevent optimizations in test_bitmap_read_perf()
 - update patch description

v8:
 - as requested by Andy Shevchenko, add tests for reading/writing
   sizes > BITS_PER_LONG

v7:
 - as requested by Yury Norov, add performance tests for bitmap_read()
   and bitmap_write()

v6:
 - use bitmap API to initialize test bitmaps
 - as requested by Yury Norov, do not check the return value of
   bitmap_read(..., 0)
 - fix a compiler warning on 32-bit systems

v5:
 - update patch title
 - address Yury Norov's comments:
   - rename the test cases
   - factor out test_bitmap_write_helper() to test writing over different
     background patterns;
   - add a test case copying a nontrivial value bit-by-bit;
   - drop volatile

v4:
 - Address comments by Andy Shevchenko:
   - added Reviewed-by: and a link to the previous discussion
 - Address comments by Yury Norov:
   - expand the bitmap to catch more corner cases
   - add code testing that bitmap_set_value() does not touch adjacent bits
   - add code testing the nbits==0 case
   - rename bitmap_{get,set}_value() to bitmap_{read,write}()

v3:
 - switch to using bitmap_{set,get}_value()
 - change the expected bit pattern in test_set_get_value(),
   as the test was incorrectly assuming 0 is the LSB.
---
 lib/test_bitmap.c | 177 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 177 insertions(+)

diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c
index f2ea9f30c7c5d..a4195c7376840 100644
--- a/lib/test_bitmap.c
+++ b/lib/test_bitmap.c
@@ -71,6 +71,17 @@ __check_eq_uint(const char *srcfile, unsigned int line,
 	return true;
 }
 
+static bool __init
+__check_eq_ulong(const char *srcfile, unsigned int line,
+		 const unsigned long exp_ulong, unsigned long x)
+{
+	if (exp_ulong != x) {
+		pr_err("[%s:%u] expected %lu, got %lu\n",
+		       srcfile, line, exp_ulong, x);
+		return false;
+	}
+	return true;
+}
 
 static bool __init
 __check_eq_bitmap(const char *srcfile, unsigned int line,
@@ -186,6 +197,7 @@ __check_eq_str(const char *srcfile, unsigned int line,
 	})
 
 #define expect_eq_uint(...)		__expect_eq(uint, ##__VA_ARGS__)
+#define expect_eq_ulong(...)		__expect_eq(ulong, ##__VA_ARGS__)
 #define expect_eq_bitmap(...)		__expect_eq(bitmap, ##__VA_ARGS__)
 #define expect_eq_pbl(...)		__expect_eq(pbl, ##__VA_ARGS__)
 #define expect_eq_u32_array(...)	__expect_eq(u32_array, ##__VA_ARGS__)
@@ -1222,6 +1234,168 @@ static void __init test_bitmap_const_eval(void)
 	BUILD_BUG_ON(~var != ~BIT(25));
 }
 
+/*
+ * Test bitmap should be big enough to include the cases when start is not in
+ * the first word, and start+nbits lands in the following word.
+ */
+#define TEST_BIT_LEN (1000)
+
+/*
+ * Helper function to test bitmap_write() overwriting the chosen byte pattern.
+ */
+static void __init test_bitmap_write_helper(const char *pattern)
+{
+	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
+	DECLARE_BITMAP(exp_bitmap, TEST_BIT_LEN);
+	DECLARE_BITMAP(pat_bitmap, TEST_BIT_LEN);
+	unsigned long w, r, bit;
+	int i, n, nbits;
+
+	/*
+	 * Only parse the pattern once and store the result in the intermediate
+	 * bitmap.
+	 */
+	bitmap_parselist(pattern, pat_bitmap, TEST_BIT_LEN);
+
+	/*
+	 * Check that writing a single bit does not accidentally touch the
+	 * adjacent bits.
+	 */
+	for (i = 0; i < TEST_BIT_LEN; i++) {
+		bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN);
+		bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN);
+		for (bit = 0; bit <= 1; bit++) {
+			bitmap_write(bitmap, bit, i, 1);
+			__assign_bit(i, exp_bitmap, bit);
+			expect_eq_bitmap(exp_bitmap, bitmap,
+					 TEST_BIT_LEN);
+		}
+	}
+
+	/* Ensure writing 0 bits does not change anything. */
+	bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN);
+	bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN);
+	for (i = 0; i < TEST_BIT_LEN; i++) {
+		bitmap_write(bitmap, ~0UL, i, 0);
+		expect_eq_bitmap(exp_bitmap, bitmap, TEST_BIT_LEN);
+	}
+
+	for (nbits = BITS_PER_LONG; nbits >= 1; nbits--) {
+		w = IS_ENABLED(CONFIG_64BIT) ? 0xdeadbeefdeadbeefUL
+					     : 0xdeadbeefUL;
+		w >>= (BITS_PER_LONG - nbits);
+		for (i = 0; i <= TEST_BIT_LEN - nbits; i++) {
+			bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN);
+			bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN);
+			for (n = 0; n < nbits; n++)
+				__assign_bit(i + n, exp_bitmap, w & BIT(n));
+			bitmap_write(bitmap, w, i, nbits);
+			expect_eq_bitmap(exp_bitmap, bitmap, TEST_BIT_LEN);
+			r = bitmap_read(bitmap, i, nbits);
+			expect_eq_ulong(r, w);
+		}
+	}
+}
+
+static void __init test_bitmap_read_write(void)
+{
+	unsigned char *pattern[3] = {"", "all:1/2", "all"};
+	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
+	unsigned long zero_bits = 0, bits_per_long = BITS_PER_LONG;
+	unsigned long val;
+	int i, pi;
+
+	/*
+	 * Reading/writing zero bits should not crash the kernel.
+	 * READ_ONCE() prevents constant folding.
+	 */
+	bitmap_write(NULL, 0, 0, READ_ONCE(zero_bits));
+	/* Return value of bitmap_read() is undefined here. */
+	bitmap_read(NULL, 0, READ_ONCE(zero_bits));
+
+	/*
+	 * Reading/writing more than BITS_PER_LONG bits should not crash the
+	 * kernel. READ_ONCE() prevents constant folding.
+	 */
+	bitmap_write(NULL, 0, 0, READ_ONCE(bits_per_long) + 1);
+	/* Return value of bitmap_read() is undefined here. */
+	bitmap_read(NULL, 0, READ_ONCE(bits_per_long) + 1);
+
+	/*
+	 * Ensure that bitmap_read() reads the same value that was previously
+	 * written, and two consequent values are correctly merged.
+	 * The resulting bit pattern is asymmetric to rule out possible issues
+	 * with bit numeration order.
+	 */
+	for (i = 0; i < TEST_BIT_LEN - 7; i++) {
+		bitmap_zero(bitmap, TEST_BIT_LEN);
+
+		bitmap_write(bitmap, 0b10101UL, i, 5);
+		val = bitmap_read(bitmap, i, 5);
+		expect_eq_ulong(0b10101UL, val);
+
+		bitmap_write(bitmap, 0b101UL, i + 5, 3);
+		val = bitmap_read(bitmap, i + 5, 3);
+		expect_eq_ulong(0b101UL, val);
+
+		val = bitmap_read(bitmap, i, 8);
+		expect_eq_ulong(0b10110101UL, val);
+	}
+
+	for (pi = 0; pi < ARRAY_SIZE(pattern); pi++)
+		test_bitmap_write_helper(pattern[pi]);
+}
+
+static void __init test_bitmap_read_perf(void)
+{
+	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
+	unsigned int cnt, nbits, i;
+	unsigned long val;
+	ktime_t time;
+
+	bitmap_fill(bitmap, TEST_BIT_LEN);
+	time = ktime_get();
+	for (cnt = 0; cnt < 5; cnt++) {
+		for (nbits = 1; nbits <= BITS_PER_LONG; nbits++) {
+			for (i = 0; i < TEST_BIT_LEN; i++) {
+				if (i + nbits > TEST_BIT_LEN)
+					break;
+				/*
+				 * Prevent the compiler from optimizing away the
+				 * bitmap_read() by using its value.
+				 */
+				WRITE_ONCE(val, bitmap_read(bitmap, i, nbits));
+			}
+		}
+	}
+	time = ktime_get() - time;
+	pr_err("Time spent in %s:\t%llu\n", __func__, time);
+}
+
+static void __init test_bitmap_write_perf(void)
+{
+	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
+	unsigned int cnt, nbits, i;
+	unsigned long val = 0xfeedface;
+	ktime_t time;
+
+	bitmap_zero(bitmap, TEST_BIT_LEN);
+	time = ktime_get();
+	for (cnt = 0; cnt < 5; cnt++) {
+		for (nbits = 1; nbits <= BITS_PER_LONG; nbits++) {
+			for (i = 0; i < TEST_BIT_LEN; i++) {
+				if (i + nbits > TEST_BIT_LEN)
+					break;
+				bitmap_write(bitmap, val, i, nbits);
+			}
+		}
+	}
+	time = ktime_get() - time;
+	pr_err("Time spent in %s:\t%llu\n", __func__, time);
+}
+
+#undef TEST_BIT_LEN
+
 static void __init selftest(void)
 {
 	test_zero_clear();
@@ -1237,6 +1411,9 @@ static void __init selftest(void)
 	test_bitmap_cut();
 	test_bitmap_print_buf();
 	test_bitmap_const_eval();
+	test_bitmap_read_write();
+	test_bitmap_read_perf();
+	test_bitmap_write_perf();
 
 	test_find_nth_bit();
 	test_for_each_set_bit();