btrfs: Fix slab-use-after-free Read in add_ra_bio_pages

Message ID 20240710-bug11-v1-1-aa02297fbbc9@gmail.com
State New, archived
Series btrfs: Fix slab-use-after-free Read in add_ra_bio_pages

Commit Message

Pei Li July 11, 2024, 3:42 a.m. UTC
We are accessing the start and len fields in em after it has been freed.

This patch stores the values we are going to access from em before it
is freed, so that we do not access freed memory.

Reported-by: syzbot+853d80cba98ce1157ae6@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=853d80cba98ce1157ae6
Signed-off-by: Pei Li <peili.dev@gmail.com>
---
Syzbot reported the following error:
BUG: KASAN: slab-use-after-free in add_ra_bio_pages.constprop.0.isra.0+0xf03/0xfb0 fs/btrfs/compression.c:529

This is because we are reading values from em right after freeing it
through free_extent_map(em).

This patch stores the values we are going to access from em before it
is freed, so that we do not access freed memory.
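
The problematic sequence in add_ra_bio_pages() is roughly the following
(simplified sketch of the existing code, not the full function):

	em = lookup_extent_mapping(em_tree, cur, page_end + 1 - cur);
	read_unlock(&em_tree->lock);
	...
	free_extent_map(em);	/* em is freed here */
	...
	/* use-after-free: em->start and em->len are read after the free */
	add_size = min(em->start + em->len, page_end + 1) - cur;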
---
 fs/btrfs/compression.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)


---
base-commit: 563a50672d8a86ec4b114a4a2f44d6e7ff855f5b
change-id: 20240710-bug11-a8ac18afb724

Best regards,

Comments

Qu Wenruo July 11, 2024, 3:56 a.m. UTC | #1
On 2024/7/11 13:12, Pei Li wrote:
> We are accessing the start and len fields in em after it has been freed.
>
> This patch stores the values we are going to access from em before it
> is freed, so that we do not access freed memory.
>
> Reported-by: syzbot+853d80cba98ce1157ae6@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=853d80cba98ce1157ae6
> Signed-off-by: Pei Li <peili.dev@gmail.com>

And it is missing the Fixes tag:

Fixes: 6a4049102055 ("btrfs: subpage: make add_ra_bio_pages() compatible")

And it is better to Cc the stable list for v5.16 and newer kernels.

> ---
> Syzbot reported the following error:
> BUG: KASAN: slab-use-after-free in add_ra_bio_pages.constprop.0.isra.0+0xf03/0xfb0 fs/btrfs/compression.c:529
>
> This is because we are reading values from em right after freeing it
> through free_extent_map(em).
>
> This patch stores the values we are going to access from em before it
> is freed, so that we do not access freed memory.

Indeed it's a use-after-free.

But the only usage is in this line:

	add_size = min(em->start + em->len, page_end + 1) - cur;

So why not just move that line before "free_extent_map(em);"?
It should be much easier to read than the current change.
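
Something like this (untested sketch, just to illustrate the idea):

	add_size = min(em->start + em->len, page_end + 1) - cur;
	free_extent_map(em);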

Thanks,
Qu
> ---
>   fs/btrfs/compression.c | 10 ++++++++--
>   1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index 6441e47d8a5e..42b528aee63b 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -449,6 +449,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
>   		u64 page_end;
>   		u64 pg_index = cur >> PAGE_SHIFT;
>   		u32 add_size;
> +		u64 start = 0, len = 0;
>
>   		if (pg_index > end_index)
>   			break;
> @@ -500,12 +501,17 @@ static noinline int add_ra_bio_pages(struct inode *inode,
>   		em = lookup_extent_mapping(em_tree, cur, page_end + 1 - cur);
>   		read_unlock(&em_tree->lock);
>
> +		if (em) {
> +			start = em->start;
> +			len = em->len;
> +		}
> +
>   		/*
>   		 * At this point, we have a locked page in the page cache for
>   		 * these bytes in the file.  But, we have to make sure they map
>   		 * to this compressed extent on disk.
>   		 */
> -		if (!em || cur < em->start ||
> +		if (!em || cur < start ||
>   		    (cur + fs_info->sectorsize > extent_map_end(em)) ||
>   		    (em->block_start >> SECTOR_SHIFT) != orig_bio->bi_iter.bi_sector) {
>   			free_extent_map(em);
> @@ -526,7 +532,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
>   			}
>   		}
>
> -		add_size = min(em->start + em->len, page_end + 1) - cur;
> +		add_size = min(start + len, page_end + 1) - cur;
>   		ret = bio_add_page(orig_bio, page, add_size, offset_in_page(cur));
>   		if (ret != add_size) {
>   			unlock_extent(tree, cur, page_end, NULL);
>
> ---
> base-commit: 563a50672d8a86ec4b114a4a2f44d6e7ff855f5b
> change-id: 20240710-bug11-a8ac18afb724
>
> Best regards,

Patch

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 6441e47d8a5e..42b528aee63b 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -449,6 +449,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 		u64 page_end;
 		u64 pg_index = cur >> PAGE_SHIFT;
 		u32 add_size;
+		u64 start = 0, len = 0;
 
 		if (pg_index > end_index)
 			break;
@@ -500,12 +501,17 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 		em = lookup_extent_mapping(em_tree, cur, page_end + 1 - cur);
 		read_unlock(&em_tree->lock);
 
+		if (em) {
+			start = em->start;
+			len = em->len;
+		}
+
 		/*
 		 * At this point, we have a locked page in the page cache for
 		 * these bytes in the file.  But, we have to make sure they map
 		 * to this compressed extent on disk.
 		 */
-		if (!em || cur < em->start ||
+		if (!em || cur < start ||
 		    (cur + fs_info->sectorsize > extent_map_end(em)) ||
 		    (em->block_start >> SECTOR_SHIFT) != orig_bio->bi_iter.bi_sector) {
 			free_extent_map(em);
@@ -526,7 +532,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 			}
 		}
 
-		add_size = min(em->start + em->len, page_end + 1) - cur;
+		add_size = min(start + len, page_end + 1) - cur;
 		ret = bio_add_page(orig_bio, page, add_size, offset_in_page(cur));
 		if (ret != add_size) {
 			unlock_extent(tree, cur, page_end, NULL);