
md/md-bitmap: fix writing non bitmap pages

Message ID 20240606153223.2460253-1-ofir.gal@volumez.com (mailing list archive)
State Superseded, archived

Commit Message

Ofir Gal June 6, 2024, 3:32 p.m. UTC
__write_sb_page() rounds up the io size to the optimal io size if it
doesn't exceed the data offset, but it doesn't check whether the final
size exceeds the bitmap length.

For example:
page count      - 1
page size       - 4K
data offset     - 1M
optimal io size - 256K

The final io size would be 256K (64 pages), but md_bitmap_storage_alloc()
allocated only 1 page. The IO would write 1 valid page and 63 pages that
happen to be allocated after it, leaking kernel memory to the raid
device's superblock area.
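
To make the arithmetic concrete, here is a minimal sketch of the clamp
the patch below adds, using the numbers from the example above
(PAGE_SHIFT == 12, i.e. 4K pages, is assumed):

    /* Sketch of the clamp added by the patch below; example values. */
    unsigned int size = 256 * 1024;            /* rounded up to the optimal io size */
    unsigned int file_pages = 1, pg_index = 0; /* md_bitmap_storage_alloc() gave 1 page */
    unsigned int bitmap_limit =
            (file_pages - pg_index) << 12;     /* assumes PAGE_SHIFT == 12 -> 4K */

    /* Unclamped, the IO covers 64 pages: 1 bitmap page + 63 unrelated ones.
     * min(size, bitmap_limit) caps it at the pages the bitmap owns. */
    size = min(size, bitmap_limit);            /* 4K */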

This issue caused a data transfer failure in nvme-tcp. The network
driver checks the first page of an IO with sendpage_ok(), which returns
true if the page isn't a slab page and its refcount is >= 1. If the
first page is !sendpage_ok(), the network driver disables
MSG_SPLICE_PAGES.

As of now, the network layer assumes all pages of the IO pass
sendpage_ok() when MSG_SPLICE_PAGES is on.
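
For reference, sendpage_ok() is roughly the following (paraphrased from
include/linux/net.h; the exact form may differ between kernel versions):

    /* Paraphrased from include/linux/net.h. */
    static inline bool sendpage_ok(struct page *page)
    {
            return !PageSlab(page) && page_count(page) >= 1;
    }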

The bitmap pages aren't slab pages, so the first page of the IO is
sendpage_ok(), but the additional pages that happen to be allocated
after the bitmap pages might be !sendpage_ok(). That causes
skb_splice_from_iter() to stop the data transfer; in the case below it
hangs 'mdadm --create'.
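
The check that fires is roughly the following (paraphrased from
skb_splice_from_iter() in net/core/skbuff.c; details and line numbers
vary between kernel versions):

    /* Paraphrased from skb_splice_from_iter() in net/core/skbuff.c. */
    ret = -EIO;
    if (WARN_ON_ONCE(!sendpage_ok(page)))
            goto splice;    /* aborts the transfer; 'mdadm --create' hangs */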

The bug is reproducible; to reproduce it you need nvme-over-tcp
controllers with an optimal IO size bigger than PAGE_SIZE. Creating a
raid with a bitmap over those devices reproduces the bug.

To simulate a large optimal IO size you can use dm-stripe with a single
device. A script that reproduces the issue on top of brd devices using
dm-stripe is attached below (it will be added to blktests).

I have added some logs to test the theory:
...
md: created bitmap (1 pages) for device md127
__write_sb_page before md_super_write offset: 16, size: 262144. pfn: 0x53ee
=== __write_sb_page before md_super_write. logging pages ===
pfn: 0x53ee, slab: 0 <-- the only page allocated for the bitmap
pfn: 0x53ef, slab: 1
pfn: 0x53f0, slab: 0
pfn: 0x53f1, slab: 0
pfn: 0x53f2, slab: 0
pfn: 0x53f3, slab: 1
...
nvme_tcp: sendpage_ok - pfn: 0x53ee, len: 262144, offset: 0
skbuff: before sendpage_ok() - pfn: 0x53ee
skbuff: before sendpage_ok() - pfn: 0x53ef
WARNING at net/core/skbuff.c:6848 skb_splice_from_iter+0x142/0x450
skbuff: !sendpage_ok - pfn: 0x53ef. is_slab: 1, page_count: 1
...

Signed-off-by: Ofir Gal <ofir.gal@volumez.com>
---
 drivers/md/md-bitmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Comments

Christoph Hellwig June 7, 2024, 4:53 a.m. UTC | #1
On Thu, Jun 06, 2024 at 06:32:20PM +0300, Ofir Gal wrote:
> +	unsigned int bitmap_limit = (bitmap->storage.file_pages - pg_index) << PAGE_SHIFT;

Please split the line after the "<<".

>  	loff_t sboff, offset = mddev->bitmap_info.offset;
>  	sector_t ps = pg_index * PAGE_SIZE / SECTOR_SIZE;
>  	unsigned int size = PAGE_SIZE;
> @@ -273,7 +274,7 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
>  		/* DATA METADATA BITMAP - no problems */
>  	}
>  
> -	md_super_write(mddev, rdev, sboff + ps, (int) size, page);
> +	md_super_write(mddev, rdev, sboff + ps, (int)min(size, bitmap_limit), page);

and drop the pointless cast here.

With that:


Reviewed-by: Christoph Hellwig <hch@lst.de>
Ofir Gal June 7, 2024, 7:29 a.m. UTC | #2
On 07/06/2024 7:53, Christoph Hellwig wrote:
> On Thu, Jun 06, 2024 at 06:32:20PM +0300, Ofir Gal wrote:
>> +	unsigned int bitmap_limit = (bitmap->storage.file_pages - pg_index) << PAGE_SHIFT;
> Please split the line after the "<<".
>
>>  	loff_t sboff, offset = mddev->bitmap_info.offset;
>>  	sector_t ps = pg_index * PAGE_SIZE / SECTOR_SIZE;
>>  	unsigned int size = PAGE_SIZE;
>> @@ -273,7 +274,7 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
>>  		/* DATA METADATA BITMAP - no problems */
>>  	}
>>  
>> -	md_super_write(mddev, rdev, sboff + ps, (int) size, page);
>> +	md_super_write(mddev, rdev, sboff + ps, (int)min(size, bitmap_limit), page);
> and drop the pointless cast here.
>
> With that:
>
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
Thanks for the review! Applied to v2.

Patch

diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index 0a2d37eb38ef..3cc2d0ad6f00 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -227,6 +227,7 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
 	struct block_device *bdev;
 	struct mddev *mddev = bitmap->mddev;
 	struct bitmap_storage *store = &bitmap->storage;
+	unsigned int bitmap_limit = (bitmap->storage.file_pages - pg_index) << PAGE_SHIFT;
 	loff_t sboff, offset = mddev->bitmap_info.offset;
 	sector_t ps = pg_index * PAGE_SIZE / SECTOR_SIZE;
 	unsigned int size = PAGE_SIZE;
@@ -273,7 +274,7 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
 		/* DATA METADATA BITMAP - no problems */
 	}
 
-	md_super_write(mddev, rdev, sboff + ps, (int) size, page);
+	md_super_write(mddev, rdev, sboff + ps, (int)min(size, bitmap_limit), page);
 	return 0;
 }
 
-- 
2.45.1


Reproduction script:
 reproduce.sh | 114 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 114 insertions(+)
 create mode 100755 reproduce.sh

diff --git a/reproduce.sh b/reproduce.sh
new file mode 100755
index 000000000..8ae226b18
--- /dev/null
+++ b/reproduce.sh
@@ -0,0 +1,114 @@ 
+#!/usr/bin/env sh
+# SPDX-License-Identifier: MIT
+
+set -e
+
+load_modules() {
+    modprobe nvme
+    modprobe nvme-tcp
+    modprobe nvmet
+    modprobe nvmet-tcp
+}
+
+setup_ns() {
+    local dev=$1
+    local num=$2
+    local port=$3
+    ls $dev > /dev/null
+
+    mkdir -p /sys/kernel/config/nvmet/subsystems/$num
+    cd /sys/kernel/config/nvmet/subsystems/$num
+    echo 1 > attr_allow_any_host
+
+    mkdir -p namespaces/$num
+    cd namespaces/$num/
+    echo $dev > device_path
+    echo 1 > enable
+
+    ln -s /sys/kernel/config/nvmet/subsystems/$num \
+        /sys/kernel/config/nvmet/ports/$port/subsystems/
+}
+
+setup_port() {
+    local num=$1
+
+    mkdir -p /sys/kernel/config/nvmet/ports/$num
+    cd /sys/kernel/config/nvmet/ports/$num
+    echo "127.0.0.1" > addr_traddr
+    echo tcp > addr_trtype
+    echo 8009 > addr_trsvcid
+    echo ipv4 > addr_adrfam
+}
+
+setup_big_opt_io() {
+    local dev=$1
+    local name=$2
+
+    # Change the optimal IO size by creating a dm-stripe device
+    dmsetup create $name --table \
+        "0 `blockdev --getsz $dev` striped 1 512 $dev 0"
+}
+
+setup_targets() {
+    # Set up ram devices instead of using real nvme devices
+    modprobe brd rd_size=1048576 rd_nr=2 # 1GiB
+
+    setup_big_opt_io /dev/ram0 ram0_big_opt_io
+    setup_big_opt_io /dev/ram1 ram1_big_opt_io
+
+    setup_port 1
+    setup_ns /dev/mapper/ram0_big_opt_io 1 1
+    setup_ns /dev/mapper/ram1_big_opt_io 2 1
+}
+
+setup_initiators() {
+    nvme connect -t tcp -n 1 -a 127.0.0.1 -s 8009
+    nvme connect -t tcp -n 2 -a 127.0.0.1 -s 8009
+}
+
+reproduce_warn() {
+    local devs=$@
+
+    # Hangs here
+    mdadm --create /dev/md/test_md --level=1 --bitmap=internal \
+        --bitmap-chunk=1024K --assume-clean --run --raid-devices=2 $devs
+}
+
+echo "###################################
+
+The script creates 2 nvme initiators in order to reproduce the bug.
+The script doesn't know which controllers it created; choose the new
+nvme controllers when asked.
+
+###################################
+
+Press enter to continue.
+"
+
+read tmp
+
+echo "# Creating 2 nvme controllers for the reproduction. current nvme devices:"
+lsblk -s | grep nvme || true
+echo "---------------------------------
+"
+
+load_modules
+setup_targets
+setup_initiators
+
+sleep 0.1 # Wait for the new nvme ctrls to show up
+
+echo "# Created 2 nvme devices. nvme devices list:"
+
+lsblk -s | grep nvme
+echo "---------------------------------
+"
+
+echo "# Insert the new nvme devices as separated lines. both should be with size of 1G"
+read dev1
+read dev2
+
+ls /dev/$dev1 > /dev/null
+ls /dev/$dev2 > /dev/null
+
+reproduce_warn /dev/$dev1 /dev/$dev2