[04/11] xfs: cleanup _xfs_buf_get_pages

Message ID 20210519190900.320044-5-hch@lst.de (mailing list archive)
State New, archived
Series [01/11] xfs: cleanup error handling in xfs_buf_get_map

Commit Message

Christoph Hellwig May 19, 2021, 7:08 p.m. UTC
Remove the check for an existing b_pages array, as this function is
always called right after allocating a buffer, so the array can never
already be set.  Also use kmem_zalloc to allocate the page array
instead of doing a manual memset, given that the inline array is
already pre-zeroed as part of the freshly allocated buffer anyway.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/xfs/xfs_buf.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)
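
For context, the invariant this cleanup relies on is established when
the buffer is allocated: the xfs_buf comes from a zeroing slab
allocation, so the whole structure, including b_pages and the inline
b_page_array, starts out zeroed.  A minimal sketch of that call-site
invariant (identifiers paraphrased from the fs/xfs/xfs_buf.c of this
era; treat the exact names as assumptions, not verbatim kernel code):

	/*
	 * Sketch: the buffer is zero-allocated, so by the time
	 * _xfs_buf_get_pages() runs, bp->b_pages is NULL and
	 * bp->b_page_array[] is already all zeroes.
	 */
	bp = kmem_cache_zalloc(xfs_buf_zone, GFP_NOFS | __GFP_NOFAIL);
	...
	error = _xfs_buf_get_pages(bp, page_count);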

Comments

Dave Chinner May 19, 2021, 10:40 p.m. UTC | #1
On Wed, May 19, 2021 at 09:08:53PM +0200, Christoph Hellwig wrote:
> Remove the check for an existing b_pages array, as this function is
> always called right after allocating a buffer, so the array can never
> already be set.  Also use kmem_zalloc to allocate the page array
> instead of doing a manual memset, given that the inline array is
> already pre-zeroed as part of the freshly allocated buffer anyway.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  fs/xfs/xfs_buf.c | 23 +++++++++++------------
>  1 file changed, 11 insertions(+), 12 deletions(-)
> 
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index 392b85d059bff5..9c64c374411081 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -281,19 +281,18 @@ _xfs_buf_get_pages(
>  	struct xfs_buf		*bp,
>  	int			page_count)
>  {
> -	/* Make sure that we have a page list */
> -	if (bp->b_pages == NULL) {
> -		bp->b_page_count = page_count;
> -		if (page_count <= XB_PAGES) {
> -			bp->b_pages = bp->b_page_array;
> -		} else {
> -			bp->b_pages = kmem_alloc(sizeof(struct page *) *
> -						 page_count, KM_NOFS);
> -			if (bp->b_pages == NULL)
> -				return -ENOMEM;
> -		}
> -		memset(bp->b_pages, 0, sizeof(struct page *) * page_count);
> +	ASSERT(bp->b_pages == NULL);
> +
> +	bp->b_page_count = page_count;
> +	if (page_count > XB_PAGES) {
> +		bp->b_pages = kmem_zalloc(sizeof(struct page *) * page_count,
> +					  KM_NOFS);
> +		if (!bp->b_pages)
> +			return -ENOMEM;
> +	} else {
> +		bp->b_pages = bp->b_page_array;
>  	}
> +
>  	return 0;
>  }

This will not apply on top of (and will break) the bulk alloc patch I
sent out - we have to ensure that the b_pages array is always zeroed
before we call the bulk alloc function, hence I moved the memset() in
this function to be unconditional.  I almost cleaned up this function
in that patchset....

Just doing this:

	bp->b_page_count = page_count;
	if (page_count > XB_PAGES) {
		bp->b_pages = kmem_alloc(sizeof(struct page *) * page_count,
					 KM_NOFS);
		if (!bp->b_pages)
			return -ENOMEM;
	} else {
		bp->b_pages = bp->b_page_array;
	}
	memset(bp->b_pages, 0, sizeof(struct page *) * page_count);

	return 0;

will make it work fine with bulk alloc.

Cheers,

Dave.
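
For reference, the bulk allocator Dave's series switches the page
allocation to is alloc_pages_bulk_array(), which only fills NULL slots
in the supplied array and returns the number of populated entries -
hence the requirement that the array be zeroed before every call.  A
hedged sketch of that usage pattern (illustrative, not Dave's actual
patch):

	unsigned long	filled = 0;

	/*
	 * Illustrative only: a stale non-NULL pointer left in b_pages
	 * would be mistaken for an already-allocated page, which is
	 * why the array must be zeroed first.
	 */
	while (filled < bp->b_page_count)
		filled = alloc_pages_bulk_array(GFP_NOFS, bp->b_page_count,
						bp->b_pages);
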
Christoph Hellwig May 20, 2021, 5:23 a.m. UTC | #2
On Thu, May 20, 2021 at 08:40:28AM +1000, Dave Chinner wrote:
> This will not apply on top of (and will break) the bulk alloc patch I
> sent out - we have to ensure that the b_pages array is always zeroed
> before we call the bulk alloc function, hence I moved the memset() in
> this function to be unconditional.  I almost cleaned up this function
> in that patchset....

The buffer is freshly allocated here using kmem_cache_zalloc, so
b_pages can't be set, b_page_array is already zeroed from
kmem_cache_zalloc, and the separate b_pages allocation is switched
to use kmem_zalloc.  I thought the commit log covered this, but maybe
I need to improve it?
Dave Chinner May 25, 2021, 10:43 p.m. UTC | #3
On Thu, May 20, 2021 at 07:23:35AM +0200, Christoph Hellwig wrote:
> On Thu, May 20, 2021 at 08:40:28AM +1000, Dave Chinner wrote:
> > This will not apply on top of (and will break) the bulk alloc patch I
> > sent out - we have to ensure that the b_pages array is always zeroed
> > before we call the bulk alloc function, hence I moved the memset() in
> > this function to be unconditional.  I almost cleaned up this function
> > in that patchset....
> 
> The buffer is freshly allocated here using kmem_cache_zalloc, so
> b_pages can't be set, b_page_array is already zeroed from
> kmem_cache_zalloc, and the separate b_pages allocation is switched
> to use kmem_zalloc.  I thought the commit log covered this, but maybe
> I need to improve it?

I think I'm still living in the past a bit, where the page array in
an active uncached buffer could change via the old "associate
memory" interface. We still actually have that interface in
userspace, but we don't have anything in the kernel that uses it any
more.

Cheers,

Dave.

Patch

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 392b85d059bff5..9c64c374411081 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -281,19 +281,18 @@ _xfs_buf_get_pages(
 	struct xfs_buf		*bp,
 	int			page_count)
 {
-	/* Make sure that we have a page list */
-	if (bp->b_pages == NULL) {
-		bp->b_page_count = page_count;
-		if (page_count <= XB_PAGES) {
-			bp->b_pages = bp->b_page_array;
-		} else {
-			bp->b_pages = kmem_alloc(sizeof(struct page *) *
-						 page_count, KM_NOFS);
-			if (bp->b_pages == NULL)
-				return -ENOMEM;
-		}
-		memset(bp->b_pages, 0, sizeof(struct page *) * page_count);
+	ASSERT(bp->b_pages == NULL);
+
+	bp->b_page_count = page_count;
+	if (page_count > XB_PAGES) {
+		bp->b_pages = kmem_zalloc(sizeof(struct page *) * page_count,
+					  KM_NOFS);
+		if (!bp->b_pages)
+			return -ENOMEM;
+	} else {
+		bp->b_pages = bp->b_page_array;
 	}
+
 	return 0;
 }