
[v8,2/2] lib/test_bitmap: add tests for bitmap_{read,write}()

Message ID 20231023102327.3074212-2-glider@google.com (mailing list archive)
State New, archived
Series [v8,1/2] lib/bitmap: add bitmap_{read,write}()

Commit Message

Alexander Potapenko Oct. 23, 2023, 10:23 a.m. UTC
Add basic tests ensuring that values can be added at arbitrary positions
of the bitmap, including those spanning into the adjacent unsigned
longs.

Signed-off-by: Alexander Potapenko <glider@google.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>

---
This patch was previously part of the "Implement MTE tag compression for
swapped pages" series
(https://lore.kernel.org/linux-arm-kernel/20231011172836.2579017-4-glider@google.com/T/)

This patch was previously called
"lib/test_bitmap: add tests for bitmap_{set,get}_value()"
(https://lore.kernel.org/lkml/20230720173956.3674987-3-glider@google.com/)
and
"lib/test_bitmap: add tests for bitmap_{set,get}_value_unaligned"
(https://lore.kernel.org/lkml/20230713125706.2884502-3-glider@google.com/)

v8:
 - as requested by Andy Shevchenko, add tests for reading/writing
   sizes > BITS_PER_LONG

v7:
 - as requested by Yury Norov, add performance tests for bitmap_read()
   and bitmap_write()

v6:
 - use bitmap API to initialize test bitmaps
 - as requested by Yury Norov, do not check the return value of
   bitmap_read(..., 0)
 - fix a compiler warning on 32-bit systems

v5:
 - update patch title
 - address Yury Norov's comments:
   - rename the test cases
   - factor out test_bitmap_write_helper() to test writing over
     different background patterns;
   - add a test case copying a nontrivial value bit-by-bit;
   - drop volatile

v4:
 - Address comments by Andy Shevchenko: added Reviewed-by: and a link to
   the previous discussion
 - Address comments by Yury Norov:
   - expand the bitmap to catch more corner cases
   - add code testing that bitmap_set_value() does not touch adjacent
     bits
   - add code testing the nbits==0 case
   - rename bitmap_{get,set}_value() to bitmap_{read,write}()

v3:
 - switch to using bitmap_{set,get}_value()
 - change the expected bit pattern in test_set_get_value(),
   as the test was incorrectly assuming 0 is the LSB.
---
 lib/test_bitmap.c | 174 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 174 insertions(+)

Comments

Andy Shevchenko Oct. 23, 2023, 11:32 a.m. UTC | #1
On Mon, Oct 23, 2023 at 12:23:27PM +0200, Alexander Potapenko wrote:
> Add basic tests ensuring that values can be added at arbitrary positions
> of the bitmap, including those spanning into the adjacent unsigned
> longs.

...

> +				val = bitmap_read(bitmap, i, nbits);
> +				(void)val;

Is it marked with __must_check? Otherwise why do we need this?
Alexander Potapenko Oct. 23, 2023, 1:50 p.m. UTC | #2
On Mon, Oct 23, 2023 at 1:32 PM Andy Shevchenko
<andriy.shevchenko@linux.intel.com> wrote:
>
> On Mon, Oct 23, 2023 at 12:23:27PM +0200, Alexander Potapenko wrote:
> > Add basic tests ensuring that values can be added at arbitrary positions
> > of the bitmap, including those spanning into the adjacent unsigned
> > longs.
>
> ...
>
> > +                             val = bitmap_read(bitmap, i, nbits);
> > +                             (void)val;
>
> Is it marked with __must_check? Otherwise why do we need this?

That was a weak attempt to prevent the compiler from completely
optimizing away the bitmap_read() calls, but I haven't really looked
at the result.
The reality is that even with this check the calls are deleted, and
the size of the function is only 68 bytes.
Replacing the val assignment with a WRITE_ONCE() actually ensures the
bitmap_read() calls are not deleted:

diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c
index ba567f53feff1..ae57ae48ef3ad 100644
--- a/lib/test_bitmap.c
+++ b/lib/test_bitmap.c
@@ -1360,8 +1360,7 @@ static void __init test_bitmap_read_perf(void)
                        for (i = 0; i < TEST_BIT_LEN; i++) {
                                if (i + nbits > TEST_BIT_LEN)
                                        break;
-                               val = bitmap_read(bitmap, i, nbits);
-                               (void)val;
+                               WRITE_ONCE(val, bitmap_read(bitmap, i, nbits));
                        }
                }
        }


--
Alexander Potapenko
Software Engineer

Andy Shevchenko Oct. 23, 2023, 7:49 p.m. UTC | #3
On Mon, Oct 23, 2023 at 03:50:42PM +0200, Alexander Potapenko wrote:
> On Mon, Oct 23, 2023 at 1:32 PM Andy Shevchenko
> <andriy.shevchenko@linux.intel.com> wrote:
> > On Mon, Oct 23, 2023 at 12:23:27PM +0200, Alexander Potapenko wrote:

...

> > > +                             val = bitmap_read(bitmap, i, nbits);
> > > +                             (void)val;
> >
> > Is it marked with __must_check? Otherwise why do we need this?
> 
> That was a weak attempt to prevent the compiler from completely
> optimizing away the bitmap_read() calls, but I haven't really looked
> at the result.
> The reality is that even with this check the calls are deleted, and
> the size of the function is only 68 bytes.
> Replacing the val assignment with a WRITE_ONCE() actually ensures the
> bitmap_read() calls are not deleted:
...
Okay, whatever you choose, please add a comment explaining this.
Yury Norov Oct. 23, 2023, 8:49 p.m. UTC | #4
On Mon, Oct 23, 2023 at 12:23:27PM +0200, Alexander Potapenko wrote:
> Add basic tests ensuring that values can be added at arbitrary positions
> of the bitmap, including those spanning into the adjacent unsigned
> longs.
> 
> Signed-off-by: Alexander Potapenko <glider@google.com>
> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> 
...
> +static void __init test_bitmap_read_perf(void)
> +{
> +	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
> +	unsigned int cnt, nbits, i;
> +	unsigned long val;
> +	ktime_t time;
> +
> +	bitmap_fill(bitmap, TEST_BIT_LEN);
> +	time = ktime_get();
> +	for (cnt = 0; cnt < 5; cnt++) {
> +		for (nbits = 1; nbits <= BITS_PER_LONG; nbits++) {
> +			for (i = 0; i < TEST_BIT_LEN; i++) {
> +				if (i + nbits > TEST_BIT_LEN)
> +					break;
> +				val = bitmap_read(bitmap, i, nbits);
> +				(void)val;
> +			}
> +		}
> +	}
> +	time = ktime_get() - time;
> +	pr_err("Time spent in %s:\t%llu\n", __func__, time);
> +}
> +
> +static void __init test_bitmap_write_perf(void)
> +{
> +	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
> +	unsigned int cnt, nbits, i;
> +	unsigned long val = 0xfeedface;
> +	ktime_t time;
> +
> +	bitmap_zero(bitmap, TEST_BIT_LEN);
> +	time = ktime_get();
> +	for (cnt = 0; cnt < 5; cnt++) {
> +		for (nbits = 1; nbits <= BITS_PER_LONG; nbits++) {
> +			for (i = 0; i < TEST_BIT_LEN; i++) {
> +				if (i + nbits > TEST_BIT_LEN)
> +					break;
> +				bitmap_write(bitmap, val, i, nbits);
> +			}
> +		}
> +	}
> +	time = ktime_get() - time;
> +	pr_err("Time spent in %s:\t%llu\n", __func__, time);

For the perf part, can you add the output example to the commit
message, and compare numbers with non-optimized for-loop()?

> +}
Alexander Potapenko Oct. 24, 2023, 11:56 a.m. UTC | #5
> > +
> > +static void __init test_bitmap_write_perf(void)
> > +{
> > +     DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
> > +     unsigned int cnt, nbits, i;
> > +     unsigned long val = 0xfeedface;
> > +     ktime_t time;
> > +
> > +     bitmap_zero(bitmap, TEST_BIT_LEN);
> > +     time = ktime_get();
> > +     for (cnt = 0; cnt < 5; cnt++) {
> > +             for (nbits = 1; nbits <= BITS_PER_LONG; nbits++) {
> > +                     for (i = 0; i < TEST_BIT_LEN; i++) {
> > +                             if (i + nbits > TEST_BIT_LEN)
> > +                                     break;
> > +                             bitmap_write(bitmap, val, i, nbits);
> > +                     }
> > +             }
> > +     }
> > +     time = ktime_get() - time;
> > +     pr_err("Time spent in %s:\t%llu\n", __func__, time);
>
> For the perf part, can you add the output example to the commit
> message, and compare numbers with non-optimized for-loop()?
>

I don't understand the purpose of this comparison.
It is evident that bitmap_write() is faster than the non-optimized
loop doing BITS_PER_LONG reads and writes of a single bit.
It is moot how much faster the current implementation is, because the
loop implementation is just a concept describing the behavior of
bitmap_write().

My understanding was that the performance tests would help if someone
decides to optimize bitmap_write() further - in that case they would
be able to compare the performance before and after their changes.
But I fail to see how it helps to compare the current implementation
to something that is a priori slower.

Patch

diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c
index f2ea9f30c7c5d..ba567f53feff1 100644
--- a/lib/test_bitmap.c
+++ b/lib/test_bitmap.c
@@ -71,6 +71,17 @@  __check_eq_uint(const char *srcfile, unsigned int line,
 	return true;
 }
 
+static bool __init
+__check_eq_ulong(const char *srcfile, unsigned int line,
+		 const unsigned long exp_ulong, unsigned long x)
+{
+	if (exp_ulong != x) {
+		pr_err("[%s:%u] expected %lu, got %lu\n",
+			srcfile, line, exp_ulong, x);
+		return false;
+	}
+	return true;
+}
 
 static bool __init
 __check_eq_bitmap(const char *srcfile, unsigned int line,
@@ -186,6 +197,7 @@  __check_eq_str(const char *srcfile, unsigned int line,
 	})
 
 #define expect_eq_uint(...)		__expect_eq(uint, ##__VA_ARGS__)
+#define expect_eq_ulong(...)		__expect_eq(ulong, ##__VA_ARGS__)
 #define expect_eq_bitmap(...)		__expect_eq(bitmap, ##__VA_ARGS__)
 #define expect_eq_pbl(...)		__expect_eq(pbl, ##__VA_ARGS__)
 #define expect_eq_u32_array(...)	__expect_eq(u32_array, ##__VA_ARGS__)
@@ -1222,6 +1234,165 @@  static void __init test_bitmap_const_eval(void)
 	BUILD_BUG_ON(~var != ~BIT(25));
 }
 
+/*
+ * Test bitmap should be big enough to include the cases when start is not in
+ * the first word, and start+nbits lands in the following word.
+ */
+#define TEST_BIT_LEN (1000)
+
+/*
+ * Helper function to test bitmap_write() overwriting the chosen bit pattern.
+ */
+static void __init test_bitmap_write_helper(const char *pattern)
+{
+	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
+	DECLARE_BITMAP(exp_bitmap, TEST_BIT_LEN);
+	DECLARE_BITMAP(pat_bitmap, TEST_BIT_LEN);
+	unsigned long w, r, bit;
+	int i, n, nbits;
+
+	/*
+	 * Only parse the pattern once and store the result in the intermediate
+	 * bitmap.
+	 */
+	bitmap_parselist(pattern, pat_bitmap, TEST_BIT_LEN);
+
+	/*
+	 * Check that writing a single bit does not accidentally touch the
+	 * adjacent bits.
+	 */
+	for (i = 0; i < TEST_BIT_LEN; i++) {
+		bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN);
+		bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN);
+		for (bit = 0; bit <= 1; bit++) {
+			bitmap_write(bitmap, bit, i, 1);
+			__assign_bit(i, exp_bitmap, bit);
+			expect_eq_bitmap(exp_bitmap, bitmap,
+					 TEST_BIT_LEN);
+		}
+	}
+
+	/* Ensure writing 0 bits does not change anything. */
+	bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN);
+	bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN);
+	for (i = 0; i < TEST_BIT_LEN; i++) {
+		bitmap_write(bitmap, ~0UL, i, 0);
+		expect_eq_bitmap(exp_bitmap, bitmap, TEST_BIT_LEN);
+	}
+
+	for (nbits = BITS_PER_LONG; nbits >= 1; nbits--) {
+		w = IS_ENABLED(CONFIG_64BIT) ? 0xdeadbeefdeadbeefUL
+					     : 0xdeadbeefUL;
+		w >>= (BITS_PER_LONG - nbits);
+		for (i = 0; i <= TEST_BIT_LEN - nbits; i++) {
+			bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN);
+			bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN);
+			for (n = 0; n < nbits; n++)
+				__assign_bit(i + n, exp_bitmap, w & BIT(n));
+			bitmap_write(bitmap, w, i, nbits);
+			expect_eq_bitmap(exp_bitmap, bitmap, TEST_BIT_LEN);
+			r = bitmap_read(bitmap, i, nbits);
+			expect_eq_ulong(r, w);
+		}
+	}
+}
+
+static void __init test_bitmap_read_write(void)
+{
+	const char *pattern[3] = {"", "all:1/2", "all"};
+	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
+	unsigned long zero_bits = 0, bits_per_long = BITS_PER_LONG;
+	unsigned long val;
+	int i, pi;
+
+	/*
+	 * Reading/writing zero bits should not crash the kernel.
+	 * READ_ONCE() prevents constant folding.
+	 */
+	bitmap_write(NULL, 0, 0, READ_ONCE(zero_bits));
+	/* Return value of bitmap_read() is undefined here. */
+	bitmap_read(NULL, 0, READ_ONCE(zero_bits));
+
+	/*
+	 * Reading/writing more than BITS_PER_LONG bits should not crash the
+	 * kernel. READ_ONCE() prevents constant folding.
+	 */
+	bitmap_write(NULL, 0, 0, READ_ONCE(bits_per_long) + 1);
+	/* Return value of bitmap_read() is undefined here. */
+	bitmap_read(NULL, 0, READ_ONCE(bits_per_long) + 1);
+
+	/*
+	 * Ensure that bitmap_read() reads the same value that was previously
+	 * written, and two consecutive values are correctly merged.
+	 * The resulting bit pattern is asymmetric to rule out possible issues
+	 * with bit numeration order.
+	 */
+	for (i = 0; i < TEST_BIT_LEN - 7; i++) {
+		bitmap_zero(bitmap, TEST_BIT_LEN);
+
+		bitmap_write(bitmap, 0b10101UL, i, 5);
+		val = bitmap_read(bitmap, i, 5);
+		expect_eq_ulong(0b10101UL, val);
+
+		bitmap_write(bitmap, 0b101UL, i + 5, 3);
+		val = bitmap_read(bitmap, i + 5, 3);
+		expect_eq_ulong(0b101UL, val);
+
+		val = bitmap_read(bitmap, i, 8);
+		expect_eq_ulong(0b10110101UL, val);
+	}
+
+	for (pi = 0; pi < ARRAY_SIZE(pattern); pi++)
+		test_bitmap_write_helper(pattern[pi]);
+}
+
+static void __init test_bitmap_read_perf(void)
+{
+	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
+	unsigned int cnt, nbits, i;
+	unsigned long val;
+	ktime_t time;
+
+	bitmap_fill(bitmap, TEST_BIT_LEN);
+	time = ktime_get();
+	for (cnt = 0; cnt < 5; cnt++) {
+		for (nbits = 1; nbits <= BITS_PER_LONG; nbits++) {
+			for (i = 0; i < TEST_BIT_LEN; i++) {
+				if (i + nbits > TEST_BIT_LEN)
+					break;
+				val = bitmap_read(bitmap, i, nbits);
+				(void)val;
+			}
+		}
+	}
+	time = ktime_get() - time;
+	pr_err("Time spent in %s:\t%llu\n", __func__, time);
+}
+
+static void __init test_bitmap_write_perf(void)
+{
+	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
+	unsigned int cnt, nbits, i;
+	unsigned long val = 0xfeedface;
+	ktime_t time;
+
+	bitmap_zero(bitmap, TEST_BIT_LEN);
+	time = ktime_get();
+	for (cnt = 0; cnt < 5; cnt++) {
+		for (nbits = 1; nbits <= BITS_PER_LONG; nbits++) {
+			for (i = 0; i < TEST_BIT_LEN; i++) {
+				if (i + nbits > TEST_BIT_LEN)
+					break;
+				bitmap_write(bitmap, val, i, nbits);
+			}
+		}
+	}
+	time = ktime_get() - time;
+	pr_err("Time spent in %s:\t%llu\n", __func__, time);
+}
+
+#undef TEST_BIT_LEN
+
 static void __init selftest(void)
 {
 	test_zero_clear();
@@ -1237,6 +1408,9 @@  static void __init selftest(void)
 	test_bitmap_cut();
 	test_bitmap_print_buf();
 	test_bitmap_const_eval();
+	test_bitmap_read_write();
+	test_bitmap_read_perf();
+	test_bitmap_write_perf();
 
 	test_find_nth_bit();
 	test_for_each_set_bit();