
[01/10] xfs: zero inode fork buffer at allocation

Message ID 20220503221728.185449-2-david@fromorbit.com (mailing list archive)
State Accepted
Series xfs: intent whiteouts

Commit Message

Dave Chinner May 3, 2022, 10:17 p.m. UTC
From: Dave Chinner <dchinner@redhat.com>

When we first allocate or resize an inline inode fork, we round up
the allocation to 4 byte alignment to satisfy journal alignment
constraints. We don't clear the unused bytes, so we can copy up to
three uninitialised bytes into the journal. Zero those bytes so we
only ever copy zeros into the journal.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 fs/xfs/libxfs/xfs_inode_fork.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)
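
For readers outside the kernel context, here is a minimal, hypothetical userspace C sketch (not part of the patch or this thread) of the hazard the commit message describes. ROUNDUP, the sample data and malloc()/calloc() are stand-ins invented for this sketch, playing the roles of the kernel's roundup(), the inline fork contents and kmem_alloc()/kmem_zalloc().

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Userspace stand-in for the kernel's roundup() macro. */
#define ROUNDUP(x, y)	((((x) + (y) - 1) / (y)) * (y))

int main(void)
{
	const char data[] = "abcde";		/* 5 bytes of "inline fork" data */
	size_t size = sizeof(data) - 1;
	size_t real_size = ROUNDUP(size, 4);	/* rounded up to 8 for alignment */

	/*
	 * Old behaviour: plain allocation.  The 3 rounding bytes beyond
	 * 'size' are never written, yet the whole real_size buffer would
	 * still be copied into the log vector.
	 */
	unsigned char *unsafe = malloc(real_size);
	memcpy(unsafe, data, size);

	/*
	 * Patched behaviour: zeroed allocation, so the rounding slack is
	 * guaranteed to be zero before the copy.
	 */
	unsigned char *safe = calloc(1, real_size);
	memcpy(safe, data, size);

	printf("size=%zu real_size=%zu slack=%zu bytes\n",
	       size, real_size, real_size - size);

	free(unsafe);
	free(safe);
	return 0;
}

With the plain allocation the slack bytes hold whatever happened to be in that memory before, which is exactly the uninitialised data the patch stops from being copied into the journal.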

Comments

Darrick J. Wong May 3, 2022, 10:41 p.m. UTC | #1
On Wed, May 04, 2022 at 08:17:19AM +1000, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> When we first allocate or resize an inline inode fork, we round up
> the allocation to 4 byte alignment to satisfy journal alignment
> constraints. We don't clear the unused bytes, so we can copy up to
> three uninitialised bytes into the journal. Zero those bytes so we
> only ever copy zeros into the journal.
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>

Yup.  Thanks for the cleanup.
Reviewed-by: Darrick J. Wong <djwong@kernel.org>

--D

> ---
>  fs/xfs/libxfs/xfs_inode_fork.c | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/xfs/libxfs/xfs_inode_fork.c b/fs/xfs/libxfs/xfs_inode_fork.c
> index 9aee4a1e2fe9..a15ff38c3d41 100644
> --- a/fs/xfs/libxfs/xfs_inode_fork.c
> +++ b/fs/xfs/libxfs/xfs_inode_fork.c
> @@ -50,8 +50,13 @@ xfs_init_local_fork(
>  		mem_size++;
>  
>  	if (size) {
> +		/*
> +		 * As we round up the allocation here, we need to ensure the
> +		 * bytes we don't copy data into are zeroed because the log
> +		 * vectors still copy them into the journal.
> +		 */
>  		real_size = roundup(mem_size, 4);
> -		ifp->if_u1.if_data = kmem_alloc(real_size, KM_NOFS);
> +		ifp->if_u1.if_data = kmem_zalloc(real_size, KM_NOFS);
>  		memcpy(ifp->if_u1.if_data, data, size);
>  		if (zero_terminate)
>  			ifp->if_u1.if_data[size] = '\0';
> @@ -500,10 +505,11 @@ xfs_idata_realloc(
>  	/*
>  	 * For inline data, the underlying buffer must be a multiple of 4 bytes
>  	 * in size so that it can be logged and stay on word boundaries.
> -	 * We enforce that here.
> +	 * We enforce that here, and use __GFP_ZERO to ensure that size
> +	 * extensions always zero the unused roundup area.
>  	 */
>  	ifp->if_u1.if_data = krealloc(ifp->if_u1.if_data, roundup(new_size, 4),
> -				      GFP_NOFS | __GFP_NOFAIL);
> +				      GFP_NOFS | __GFP_NOFAIL | __GFP_ZERO);
>  	ifp->if_bytes = new_size;
>  }
>  
> -- 
> 2.35.1
>
Allison Henderson May 3, 2022, 10:42 p.m. UTC | #2
On Wed, 2022-05-04 at 08:17 +1000, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> When we first allocate or resize an inline inode fork, we round up
> the allocation to 4 byte alignment to satisfy journal alignment
> constraints. We don't clear the unused bytes, so we can copy up to
> three uninitialised bytes into the journal. Zero those bytes so we
> only ever copy zeros into the journal.
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
Looks good
Reviewed-by: Allison Henderson <allison.henderson@oracle.com>

> ---
>  fs/xfs/libxfs/xfs_inode_fork.c | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/xfs/libxfs/xfs_inode_fork.c b/fs/xfs/libxfs/xfs_inode_fork.c
> index 9aee4a1e2fe9..a15ff38c3d41 100644
> --- a/fs/xfs/libxfs/xfs_inode_fork.c
> +++ b/fs/xfs/libxfs/xfs_inode_fork.c
> @@ -50,8 +50,13 @@ xfs_init_local_fork(
>  		mem_size++;
>  
>  	if (size) {
> +		/*
> +		 * As we round up the allocation here, we need to ensure the
> +		 * bytes we don't copy data into are zeroed because the log
> +		 * vectors still copy them into the journal.
> +		 */
>  		real_size = roundup(mem_size, 4);
> -		ifp->if_u1.if_data = kmem_alloc(real_size, KM_NOFS);
> +		ifp->if_u1.if_data = kmem_zalloc(real_size, KM_NOFS);
>  		memcpy(ifp->if_u1.if_data, data, size);
>  		if (zero_terminate)
>  			ifp->if_u1.if_data[size] = '\0';
> @@ -500,10 +505,11 @@ xfs_idata_realloc(
>  	/*
>  	 * For inline data, the underlying buffer must be a multiple of 4 bytes
>  	 * in size so that it can be logged and stay on word boundaries.
> -	 * We enforce that here.
> +	 * We enforce that here, and use __GFP_ZERO to ensure that size
> +	 * extensions always zero the unused roundup area.
>  	 */
>  	ifp->if_u1.if_data = krealloc(ifp->if_u1.if_data, roundup(new_size, 4),
> -				      GFP_NOFS | __GFP_NOFAIL);
> +				      GFP_NOFS | __GFP_NOFAIL | __GFP_ZERO);
>  	ifp->if_bytes = new_size;
>  }
>
Christoph Hellwig May 10, 2022, 12:47 p.m. UTC | #3
On Wed, May 04, 2022 at 08:17:19AM +1000, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> When we first allocate or resize an inline inode fork, we round up
> the allocation to 4 byte alignment to satisfy journal alignment
> constraints. We don't clear the unused bytes, so we can copy up to
> three uninitialised bytes into the journal. Zero those bytes so we
> only ever copy zeros into the journal.

It took me a while to figure out how __GFP_ZERO works for krealloc,
and it seems like kmalloc and friends always zero the entire length
of the kmem_cache object, so if krealloc reuses a rounded up allocation
the padding is already zeroed at that point.

So this looks good to me:

Reviewed-by: Christoph Hellwig <hch@lst.de>
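
For illustration only: a hypothetical userspace analogue of the property the patch relies on here, namely that after a size extension the bytes beyond the old size start out zeroed. The grow_zeroed() helper below is invented for this sketch; it is not how krealloc() or the slab allocator is implemented, it merely reproduces the visible effect of passing __GFP_ZERO on a growing reallocation.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical helper: grow a buffer and make sure the bytes beyond the
 * old size are zero, mirroring the effect the patch wants from
 * krealloc(..., GFP_NOFS | __GFP_NOFAIL | __GFP_ZERO).
 */
static void *grow_zeroed(void *buf, size_t old_size, size_t new_size)
{
	void *p = realloc(buf, new_size);

	if (p && new_size > old_size)
		memset((char *)p + old_size, 0, new_size - old_size);
	return p;
}

int main(void)
{
	size_t old_size = 6, new_size = 8;	/* e.g. roundup(6, 4) == 8 */
	char *buf = calloc(1, old_size);

	memcpy(buf, "inline", old_size);
	buf = grow_zeroed(buf, old_size, new_size);

	/* The two rounding bytes at the end are guaranteed to be zero. */
	printf("tail bytes: %d %d\n", buf[6], buf[7]);
	free(buf);
	return 0;
}

The point of Christoph's observation is that in the kernel this zeroing largely comes for free: the original zeroed allocation already covers the rounded-up object, so a krealloc that stays within it finds the padding zeroed.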

Patch

diff --git a/fs/xfs/libxfs/xfs_inode_fork.c b/fs/xfs/libxfs/xfs_inode_fork.c
index 9aee4a1e2fe9..a15ff38c3d41 100644
--- a/fs/xfs/libxfs/xfs_inode_fork.c
+++ b/fs/xfs/libxfs/xfs_inode_fork.c
@@ -50,8 +50,13 @@  xfs_init_local_fork(
 		mem_size++;
 
 	if (size) {
+		/*
+		 * As we round up the allocation here, we need to ensure the
+		 * bytes we don't copy data into are zeroed because the log
+		 * vectors still copy them into the journal.
+		 */
 		real_size = roundup(mem_size, 4);
-		ifp->if_u1.if_data = kmem_alloc(real_size, KM_NOFS);
+		ifp->if_u1.if_data = kmem_zalloc(real_size, KM_NOFS);
 		memcpy(ifp->if_u1.if_data, data, size);
 		if (zero_terminate)
 			ifp->if_u1.if_data[size] = '\0';
@@ -500,10 +505,11 @@  xfs_idata_realloc(
 	/*
 	 * For inline data, the underlying buffer must be a multiple of 4 bytes
 	 * in size so that it can be logged and stay on word boundaries.
-	 * We enforce that here.
+	 * We enforce that here, and use __GFP_ZERO to ensure that size
+	 * extensions always zero the unused roundup area.
 	 */
 	ifp->if_u1.if_data = krealloc(ifp->if_u1.if_data, roundup(new_size, 4),
-				      GFP_NOFS | __GFP_NOFAIL);
+				      GFP_NOFS | __GFP_NOFAIL | __GFP_ZERO);
 	ifp->if_bytes = new_size;
 }