From patchwork Wed May 19 19:08:50 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12268227
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Cc: Dave Chinner
Subject: [PATCH 01/11] xfs: cleanup error handling in xfs_buf_get_map
Date: Wed, 19 May 2021 21:08:50 +0200
Message-Id: <20210519190900.320044-2-hch@lst.de>
In-Reply-To: <20210519190900.320044-1-hch@lst.de>
References: <20210519190900.320044-1-hch@lst.de>
X-Mailing-List: linux-xfs@vger.kernel.org

Use a single goto label for freeing the buffer and returning an error.

Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/xfs/xfs_buf.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 592800c8852f45..80be0333f077c0 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -721,16 +721,12 @@ xfs_buf_get_map(
 		return error;
 
 	error = xfs_buf_allocate_memory(new_bp, flags);
-	if (error) {
-		xfs_buf_free(new_bp);
-		return error;
-	}
+	if (error)
+		goto out_free_buf;
 
 	error = xfs_buf_find(target, map, nmaps, flags, new_bp, &bp);
-	if (error) {
-		xfs_buf_free(new_bp);
-		return error;
-	}
+	if (error)
+		goto out_free_buf;
 
 	if (bp != new_bp)
 		xfs_buf_free(new_bp);
@@ -758,6 +754,9 @@ xfs_buf_get_map(
 	trace_xfs_buf_get(bp, flags, _RET_IP_);
 	*bpp = bp;
 	return 0;
+out_free_buf:
+	xfs_buf_free(new_bp);
+	return error;
 }
 
 int
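The single-label unwind this patch introduces is a common C cleanup idiom: every failure point after the allocation jumps to the one place that knows how to undo it, so the free-and-return sequence exists exactly once. A minimal userspace sketch of the same shape (the ebuf type and step functions are made up for illustration, not the XFS ones):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

struct ebuf { void *mem; };

/* made-up stand-in for xfs_buf_allocate_memory() */
static int ebuf_alloc(struct ebuf *bp)
{
	bp->mem = malloc(64);
	return bp->mem ? 0 : -ENOMEM;
}

/* made-up stand-in for xfs_buf_find(); failure is simulated */
static int ebuf_find(struct ebuf *bp, int simulate_error)
{
	(void)bp;
	return simulate_error ? -EINVAL : 0;
}

static int ebuf_get(struct ebuf *bp, int simulate_find_error)
{
	int error;

	error = ebuf_alloc(bp);
	if (error)
		return error;

	error = ebuf_find(bp, simulate_find_error);
	if (error)
		goto out_free_buf;

	return 0;

out_free_buf:
	/* the free-and-return sequence now exists in exactly one place */
	free(bp->mem);
	bp->mem = NULL;
	return error;
}
```

Adding a second or third failure point later only needs another `goto out_free_buf`, not another copy of the cleanup.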
From patchwork Wed May 19 19:08:51 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12268229
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Cc: Dave Chinner
Subject: [PATCH 02/11] xfs: split xfs_buf_allocate_memory
Date: Wed, 19 May 2021 21:08:51 +0200
Message-Id: <20210519190900.320044-3-hch@lst.de>
In-Reply-To: <20210519190900.320044-1-hch@lst.de>
References: <20210519190900.320044-1-hch@lst.de>
X-Mailing-List: linux-xfs@vger.kernel.org

Split xfs_buf_allocate_memory into one helper that allocates from slab
and one that allocates using the page allocator.
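The resulting call-site pattern, try the cheap slab path first and fall back to the page path when the buffer is too large or the slab attempt fails, can be sketched in plain C (all names here are illustrative stand-ins, not the kernel helpers):

```c
#include <assert.h>
#include <stdlib.h>

#define DEMO_PAGE_SIZE 4096	/* stand-in for PAGE_SIZE */

struct sbuf { void *mem; int backed_by_heap; };

/* stand-in for xfs_buf_alloc_slab(): only meant for sub-page buffers */
static int sbuf_alloc_heap(struct sbuf *bp, size_t size)
{
	bp->mem = malloc(size);
	if (!bp->mem)
		return -1;
	bp->backed_by_heap = 1;
	return 0;
}

/* stand-in for xfs_buf_alloc_pages() */
static int sbuf_alloc_pages(struct sbuf *bp, size_t size)
{
	bp->mem = calloc(1, size);	/* pretend: page allocator */
	if (!bp->mem)
		return -1;
	bp->backed_by_heap = 0;
	return 0;
}

/* caller picks the helper, falling back to pages when the cheap path
 * refuses or fails - the same shape as the patched xfs_buf_get_map() */
static int sbuf_alloc(struct sbuf *bp, size_t size)
{
	if (size >= DEMO_PAGE_SIZE || sbuf_alloc_heap(bp, size) < 0)
		return sbuf_alloc_pages(bp, size);
	return 0;
}
```

Keeping the size check in the caller means each helper only handles one allocation strategy, which is what makes the split worthwhile.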
Signed-off-by: Christoph Hellwig
---
 fs/xfs/xfs_buf.c | 83 +++++++++++++++++++++++++-----------------------
 1 file changed, 44 insertions(+), 39 deletions(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 80be0333f077c0..ac85ec6f0a2fab 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -347,11 +347,41 @@ xfs_buf_free(
 	kmem_cache_free(xfs_buf_zone, bp);
 }
 
+static int
+xfs_buf_alloc_slab(
+	struct xfs_buf		*bp,
+	unsigned int		flags)
+{
+	struct xfs_buftarg	*btp = bp->b_target;
+	int			align = xfs_buftarg_dma_alignment(btp);
+	size_t			size = BBTOB(bp->b_length);
+	xfs_km_flags_t		km_flags = KM_ZERO;
+
+	if (!(flags & XBF_READ))
+		km_flags |= KM_ZERO;
+	bp->b_addr = kmem_alloc_io(size, align, km_flags);
+	if (!bp->b_addr)
+		return -ENOMEM;
+	if (((unsigned long)(bp->b_addr + size - 1) & PAGE_MASK) !=
+	    ((unsigned long)bp->b_addr & PAGE_MASK)) {
+		/* b_addr spans two pages - use alloc_page instead */
+		kmem_free(bp->b_addr);
+		bp->b_addr = NULL;
+		return -ENOMEM;
+	}
+	bp->b_offset = offset_in_page(bp->b_addr);
+	bp->b_pages = bp->b_page_array;
+	bp->b_pages[0] = kmem_to_page(bp->b_addr);
+	bp->b_page_count = 1;
+	bp->b_flags |= _XBF_KMEM;
+	return 0;
+}
+
 /*
  * Allocates all the pages for buffer in question and builds it's page list.
  */
-STATIC int
-xfs_buf_allocate_memory(
+static int
+xfs_buf_alloc_pages(
 	struct xfs_buf	*bp,
 	uint		flags)
 {
@@ -361,47 +391,14 @@ xfs_buf_allocate_memory(
 	unsigned short	page_count, i;
 	xfs_off_t	start, end;
 	int		error;
-	xfs_km_flags_t	kmflag_mask = 0;
 
 	/*
 	 * assure zeroed buffer for non-read cases.
 	 */
-	if (!(flags & XBF_READ)) {
-		kmflag_mask |= KM_ZERO;
+	if (!(flags & XBF_READ))
 		gfp_mask |= __GFP_ZERO;
-	}
 
-	/*
-	 * for buffers that are contained within a single page, just allocate
-	 * the memory from the heap - there's no need for the complexity of
-	 * page arrays to keep allocation down to order 0.
-	 */
 	size = BBTOB(bp->b_length);
-	if (size < PAGE_SIZE) {
-		int align_mask = xfs_buftarg_dma_alignment(bp->b_target);
-		bp->b_addr = kmem_alloc_io(size, align_mask,
-					   KM_NOFS | kmflag_mask);
-		if (!bp->b_addr) {
-			/* low memory - use alloc_page loop instead */
-			goto use_alloc_page;
-		}
-
-		if (((unsigned long)(bp->b_addr + size - 1) & PAGE_MASK) !=
-		    ((unsigned long)bp->b_addr & PAGE_MASK)) {
-			/* b_addr spans two pages - use alloc_page instead */
-			kmem_free(bp->b_addr);
-			bp->b_addr = NULL;
-			goto use_alloc_page;
-		}
-		bp->b_offset = offset_in_page(bp->b_addr);
-		bp->b_pages = bp->b_page_array;
-		bp->b_pages[0] = kmem_to_page(bp->b_addr);
-		bp->b_page_count = 1;
-		bp->b_flags |= _XBF_KMEM;
-		return 0;
-	}
-
-use_alloc_page:
 	start = BBTOB(bp->b_maps[0].bm_bn) >> PAGE_SHIFT;
 	end = (BBTOB(bp->b_maps[0].bm_bn + bp->b_length) + PAGE_SIZE - 1)
 							>> PAGE_SHIFT;
@@ -720,9 +717,17 @@ xfs_buf_get_map(
 	if (error)
 		return error;
 
-	error = xfs_buf_allocate_memory(new_bp, flags);
-	if (error)
-		goto out_free_buf;
+	/*
+	 * For buffers that are contained within a single page, just allocate
+	 * the memory from the heap - there's no need for the complexity of
+	 * page arrays to keep allocation down to order 0.
+	 */
+	if (BBTOB(new_bp->b_length) >= PAGE_SIZE ||
+	    xfs_buf_alloc_slab(new_bp, flags) < 0) {
+		error = xfs_buf_alloc_pages(new_bp, flags);
+		if (error)
+			goto out_free_buf;
+	}
 
 	error = xfs_buf_find(target, map, nmaps, flags, new_bp, &bp);
 	if (error)

From patchwork Wed May 19 19:08:52 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12268231
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Cc: Dave Chinner
Subject: [PATCH 03/11] xfs: remove ->b_offset handling for page backed buffers
Date: Wed, 19 May 2021 21:08:52 +0200
Message-Id: <20210519190900.320044-4-hch@lst.de>
In-Reply-To: <20210519190900.320044-1-hch@lst.de>
References: <20210519190900.320044-1-hch@lst.de>
X-Mailing-List: linux-xfs@vger.kernel.org

->b_offset can only be non-zero for SLAB backed buffers, so remove all
code dealing with it for page backed buffers.
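Once the first page no longer carries an offset, the byte-offset arithmetic that remains in xfs_buf_offset() is plain shift-and-mask. A small sketch with a made-up 4 KiB page size (names are illustrative, not the kernel macros):

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_PAGE_SHIFT 12
#define DEMO_PAGE_SIZE  (1u << DEMO_PAGE_SHIFT)

/* with b_offset always zero for page-backed buffers, a byte offset maps
 * directly to (page index, offset within that page), no correction term */
static unsigned int demo_page_index(uint32_t offset)
{
	return offset >> DEMO_PAGE_SHIFT;
}

static unsigned int demo_offset_in_page(uint32_t offset)
{
	return offset & (DEMO_PAGE_SIZE - 1);
}
```

Before this patch the same lookup had to add `b_offset` first; dropping it removes one more thing that could get out of sync between mapping and unmapping.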
Signed-off-by: Christoph Hellwig
---
 fs/xfs/xfs_buf.c | 15 +++++----------
 fs/xfs/xfs_buf.h |  3 ++-
 2 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index ac85ec6f0a2fab..392b85d059bff5 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -79,7 +79,7 @@ static inline int
 xfs_buf_vmap_len(
 	struct xfs_buf	*bp)
 {
-	return (bp->b_page_count * PAGE_SIZE) - bp->b_offset;
+	return (bp->b_page_count * PAGE_SIZE);
 }
 
 /*
@@ -329,8 +329,7 @@ xfs_buf_free(
 	uint		i;
 
 	if (xfs_buf_is_vmapped(bp))
-		vm_unmap_ram(bp->b_addr - bp->b_offset,
-				bp->b_page_count);
+		vm_unmap_ram(bp->b_addr, bp->b_page_count);
 
 	for (i = 0; i < bp->b_page_count; i++) {
 		struct page	*page = bp->b_pages[i];
@@ -386,7 +385,7 @@ xfs_buf_alloc_pages(
 	uint		flags)
 {
 	size_t		size;
-	size_t		nbytes, offset;
+	size_t		nbytes;
 	gfp_t		gfp_mask = xb_to_gfp(flags);
 	unsigned short	page_count, i;
 	xfs_off_t	start, end;
@@ -407,7 +406,6 @@ xfs_buf_alloc_pages(
 	if (unlikely(error))
 		return error;
 
-	offset = bp->b_offset;
 	bp->b_flags |= _XBF_PAGES;
 
 	for (i = 0; i < bp->b_page_count; i++) {
@@ -441,10 +439,9 @@ xfs_buf_alloc_pages(
 
 		XFS_STATS_INC(bp->b_mount, xb_page_found);
 
-		nbytes = min_t(size_t, size, PAGE_SIZE - offset);
+		nbytes = min_t(size_t, size, PAGE_SIZE);
 		size -= nbytes;
 		bp->b_pages[i] = page;
-		offset = 0;
 	}
 	return 0;
 
@@ -466,7 +463,7 @@ _xfs_buf_map_pages(
 	ASSERT(bp->b_flags & _XBF_PAGES);
 	if (bp->b_page_count == 1) {
 		/* A single page buffer is always mappable */
-		bp->b_addr = page_address(bp->b_pages[0]) + bp->b_offset;
+		bp->b_addr = page_address(bp->b_pages[0]);
 	} else if (flags & XBF_UNMAPPED) {
 		bp->b_addr = NULL;
 	} else {
@@ -493,7 +490,6 @@ _xfs_buf_map_pages(
 
 		if (!bp->b_addr)
 			return -ENOMEM;
-		bp->b_addr += bp->b_offset;
 	}
 
 	return 0;
@@ -1726,7 +1722,6 @@ xfs_buf_offset(
 	if (bp->b_addr)
 		return bp->b_addr + offset;
 
-	offset += bp->b_offset;
 	page = bp->b_pages[offset >> PAGE_SHIFT];
 	return page_address(page) + (offset & (PAGE_SIZE-1));
 }
diff --git a/fs/xfs/xfs_buf.h b/fs/xfs/xfs_buf.h
index 459ca34f26f588..21b4c58fd2fa87 100644
--- a/fs/xfs/xfs_buf.h
+++ b/fs/xfs/xfs_buf.h
@@ -167,7 +167,8 @@ struct xfs_buf {
 	atomic_t		b_pin_count;	/* pin count */
 	atomic_t		b_io_remaining;	/* #outstanding I/O requests */
 	unsigned int		b_page_count;	/* size of page array */
-	unsigned int		b_offset;	/* page offset in first page */
+	unsigned int		b_offset;	/* page offset in first page,
+						   only used for SLAB buffers */
 	int			b_error;	/* error code on I/O */
 
 	/*

From patchwork Wed May 19 19:08:53 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12268233
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Cc: Dave Chinner
Subject: [PATCH 04/11] xfs: cleanup _xfs_buf_get_pages
Date: Wed, 19 May 2021 21:08:53 +0200
Message-Id: <20210519190900.320044-5-hch@lst.de>
In-Reply-To: <20210519190900.320044-1-hch@lst.de>
References: <20210519190900.320044-1-hch@lst.de>
X-Mailing-List: linux-xfs@vger.kernel.org

Remove the check for an existing b_pages array, as this function is
always called right after allocating a buffer, so this can't happen.
Also use kmem_zalloc to allocate the page array instead of doing a
manual memset, given that the inline array is already pre-zeroed as
part of the freshly allocated buffer anyway.
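The shape of the cleaned-up helper can be sketched in userspace C, with calloc standing in for kmem_zalloc and a made-up inline array size (all names here are illustrative, not the kernel ones):

```c
#include <assert.h>
#include <stdlib.h>

#define DEMO_XB_PAGES 4	/* stand-in for the inline XB_PAGES array size */

struct demo_pagelist {
	void	**pages;
	void	*page_array[DEMO_XB_PAGES];
	int	page_count;
};

/* same shape as the cleaned-up _xfs_buf_get_pages(): the inline array
 * comes pre-zeroed from the freshly zeroed containing struct, and calloc
 * (standing in for kmem_zalloc) zeroes the heap case, so no separate
 * memset is needed anywhere */
static int demo_get_pages(struct demo_pagelist *b, int page_count)
{
	b->page_count = page_count;
	if (page_count > DEMO_XB_PAGES) {
		b->pages = calloc(page_count, sizeof(void *));
		if (!b->pages)
			return -1;
	} else {
		b->pages = b->page_array;
	}
	return 0;
}
```

Small buffers take the inline array and never touch the allocator; only page counts above the inline limit pay for a heap allocation.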
Signed-off-by: Christoph Hellwig
---
 fs/xfs/xfs_buf.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 392b85d059bff5..9c64c374411081 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -281,19 +281,18 @@ _xfs_buf_get_pages(
 	struct xfs_buf	*bp,
 	int		page_count)
 {
-	/* Make sure that we have a page list */
-	if (bp->b_pages == NULL) {
-		bp->b_page_count = page_count;
-		if (page_count <= XB_PAGES) {
-			bp->b_pages = bp->b_page_array;
-		} else {
-			bp->b_pages = kmem_alloc(sizeof(struct page *) *
-						 page_count, KM_NOFS);
-			if (bp->b_pages == NULL)
-				return -ENOMEM;
-		}
-		memset(bp->b_pages, 0, sizeof(struct page *) * page_count);
+	ASSERT(bp->b_pages == NULL);
+
+	bp->b_page_count = page_count;
+	if (page_count > XB_PAGES) {
+		bp->b_pages = kmem_zalloc(sizeof(struct page *) * page_count,
+					  KM_NOFS);
+		if (!bp->b_pages)
+			return -ENOMEM;
+	} else {
+		bp->b_pages = bp->b_page_array;
 	}
+
 	return 0;
 }

From patchwork Wed May 19 19:08:54 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12268235
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Cc: Dave Chinner
Subject: [PATCH 05/11] xfs: remove the xb_page_found stat counter in xfs_buf_alloc_pages
Date: Wed, 19 May 2021 21:08:54 +0200
Message-Id: <20210519190900.320044-6-hch@lst.de>
In-Reply-To: <20210519190900.320044-1-hch@lst.de>
References: <20210519190900.320044-1-hch@lst.de>
X-Mailing-List: linux-xfs@vger.kernel.org

We never find any existing page here, as all pages are freshly
allocated from the page allocator, so the xb_page_found stat counter
counts nothing useful.
Signed-off-by: Christoph Hellwig
---
 fs/xfs/xfs_buf.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 9c64c374411081..76240d84d58b61 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -436,8 +436,6 @@ xfs_buf_alloc_pages(
 			goto retry;
 		}
 
-		XFS_STATS_INC(bp->b_mount, xb_page_found);
-
 		nbytes = min_t(size_t, size, PAGE_SIZE);
 		size -= nbytes;
 		bp->b_pages[i] = page;

From patchwork Wed May 19 19:08:55 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12268237
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Cc: Dave Chinner
Subject: [PATCH 06/11] xfs: remove the size and nbytes variables in xfs_buf_alloc_pages
Date: Wed, 19 May 2021 21:08:55 +0200
Message-Id: <20210519190900.320044-7-hch@lst.de>
In-Reply-To: <20210519190900.320044-1-hch@lst.de>
References: <20210519190900.320044-1-hch@lst.de>
X-Mailing-List: linux-xfs@vger.kernel.org

These variables are not used for anything but recursively updating each
other.
Signed-off-by: Christoph Hellwig
---
 fs/xfs/xfs_buf.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 76240d84d58b61..08c8667e6027fc 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -383,8 +383,6 @@ xfs_buf_alloc_pages(
 	struct xfs_buf	*bp,
 	uint		flags)
 {
-	size_t		size;
-	size_t		nbytes;
 	gfp_t		gfp_mask = xb_to_gfp(flags);
 	unsigned short	page_count, i;
 	xfs_off_t	start, end;
@@ -396,7 +394,6 @@ xfs_buf_alloc_pages(
 	if (!(flags & XBF_READ))
 		gfp_mask |= __GFP_ZERO;
 
-	size = BBTOB(bp->b_length);
 	start = BBTOB(bp->b_maps[0].bm_bn) >> PAGE_SHIFT;
 	end = (BBTOB(bp->b_maps[0].bm_bn + bp->b_length) + PAGE_SIZE - 1)
 							>> PAGE_SHIFT;
@@ -436,8 +433,6 @@ xfs_buf_alloc_pages(
 			goto retry;
 		}
 
-		nbytes = min_t(size_t, size, PAGE_SIZE);
-		size -= nbytes;
 		bp->b_pages[i] = page;
 	}
 	return 0;

From patchwork Wed May 19 19:08:56 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12268239
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Cc: Dave Chinner
Subject: [PATCH 07/11] xfs: simplify the b_page_count calculation
Date: Wed, 19 May 2021 21:08:56 +0200
Message-Id: <20210519190900.320044-8-hch@lst.de>
In-Reply-To: <20210519190900.320044-1-hch@lst.de>
References: <20210519190900.320044-1-hch@lst.de>
X-Mailing-List: linux-xfs@vger.kernel.org

Ever since we stopped using the Linux page cache to back XFS buffers,
there is no need to take the start sector into account for calculating
the number of pages in a buffer, as the data always starts at the
beginning of the buffer.
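The difference between the two calculations can be sketched as follows: demo_pages_spanned mirrors the old start/end arithmetic based on the on-disk address, demo_pages_needed the new length-only DIV_ROUND_UP. Names and the 4 KiB page size are illustrative, not the kernel macros:

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_PAGE_SHIFT	12
#define DEMO_PAGE_SIZE	(1ull << DEMO_PAGE_SHIFT)
#define DEMO_DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/* old scheme: count the pages spanned by [start_byte, start_byte + len)
 * as derived from the disk address */
static uint64_t demo_pages_spanned(uint64_t start_byte, uint64_t len)
{
	uint64_t first = start_byte >> DEMO_PAGE_SHIFT;
	uint64_t last = (start_byte + len + DEMO_PAGE_SIZE - 1) >>
			DEMO_PAGE_SHIFT;

	return last - first;
}

/* new scheme: data always starts at offset 0 of the first page, so only
 * the length matters */
static uint64_t demo_pages_needed(uint64_t len)
{
	return DEMO_DIV_ROUND_UP(len, DEMO_PAGE_SIZE);
}
```

For a page-aligned start the two agree; for an unaligned start the old formula can count one page more than the data actually needs, which is exactly the slack the patch removes.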
Signed-off-by: Christoph Hellwig
---
 fs/xfs/xfs_buf.c | 26 +++++++++-----------------
 1 file changed, 9 insertions(+), 17 deletions(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 08c8667e6027fc..76a107e3cb2a22 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -278,15 +278,14 @@ _xfs_buf_alloc(
  */
 STATIC int
 _xfs_buf_get_pages(
-	struct xfs_buf	*bp,
-	int		page_count)
+	struct xfs_buf	*bp)
 {
 	ASSERT(bp->b_pages == NULL);
 
-	bp->b_page_count = page_count;
-	if (page_count > XB_PAGES) {
-		bp->b_pages = kmem_zalloc(sizeof(struct page *) * page_count,
-					  KM_NOFS);
+	bp->b_page_count = DIV_ROUND_UP(BBTOB(bp->b_length), PAGE_SIZE);
+	if (bp->b_page_count > XB_PAGES) {
+		bp->b_pages = kmem_zalloc(sizeof(struct page *) *
+					  bp->b_page_count, KM_NOFS);
 		if (!bp->b_pages)
 			return -ENOMEM;
 	} else {
@@ -384,8 +383,7 @@ xfs_buf_alloc_pages(
 	uint		flags)
 {
 	gfp_t		gfp_mask = xb_to_gfp(flags);
-	unsigned short	page_count, i;
-	xfs_off_t	start, end;
+	unsigned short	i;
 	int		error;
 
 	/*
@@ -394,11 +392,7 @@ xfs_buf_alloc_pages(
 	if (!(flags & XBF_READ))
 		gfp_mask |= __GFP_ZERO;
 
-	start = BBTOB(bp->b_maps[0].bm_bn) >> PAGE_SHIFT;
-	end = (BBTOB(bp->b_maps[0].bm_bn + bp->b_length) + PAGE_SIZE - 1)
-							>> PAGE_SHIFT;
-	page_count = end - start;
-	error = _xfs_buf_get_pages(bp, page_count);
+	error = _xfs_buf_get_pages(bp);
 	if (unlikely(error))
 		return error;
 
@@ -942,7 +936,6 @@ xfs_buf_get_uncached(
 	int		flags,
 	struct xfs_buf	**bpp)
 {
-	unsigned long	page_count;
 	int		error, i;
 	struct xfs_buf	*bp;
 	DEFINE_SINGLE_BUF_MAP(map, XFS_BUF_DADDR_NULL, numblks);
@@ -954,12 +947,11 @@ xfs_buf_get_uncached(
 	if (error)
 		goto fail;
 
-	page_count = PAGE_ALIGN(numblks << BBSHIFT) >> PAGE_SHIFT;
-	error = _xfs_buf_get_pages(bp, page_count);
+	error = _xfs_buf_get_pages(bp);
 	if (error)
 		goto fail_free_buf;
 
-	for (i = 0; i < page_count; i++) {
+	for (i = 0; i < bp->b_page_count; i++) {
 		bp->b_pages[i] = alloc_page(xb_to_gfp(flags));
 		if (!bp->b_pages[i]) {
 			error = -ENOMEM;

From patchwork Wed May 19 19:08:57 2021
X-Patchwork-Id: 12268241
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Cc: Dave Chinner
Subject: [PATCH 08/11] xfs: centralize page allocation and freeing for buffers
Date: Wed, 19 May 2021 21:08:57 +0200
Message-Id: <20210519190900.320044-9-hch@lst.de>

Factor out two helpers that do everything needed for allocating and
freeing pages that back a buffer, and remove the duplication between the
different interfaces.

Signed-off-by: Christoph Hellwig
---
 fs/xfs/xfs_buf.c | 110 ++++++++++++++++------------------
 1 file changed, 37 insertions(+), 73 deletions(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 76a107e3cb2a22..31aff8323605cd 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -273,35 +273,17 @@ _xfs_buf_alloc(
 }
 
 /*
- * Allocate a page array capable of holding a specified number
- * of pages, and point the page buf at it.
+ * Free all pages allocated to the buffer including the page map.
  */
-STATIC int
-_xfs_buf_get_pages(
-	struct xfs_buf	*bp)
+static void
+xfs_buf_free_pages(
+	struct xfs_buf	*bp)
 {
-	ASSERT(bp->b_pages == NULL);
-
-	bp->b_page_count = DIV_ROUND_UP(BBTOB(bp->b_length), PAGE_SIZE);
-	if (bp->b_page_count > XB_PAGES) {
-		bp->b_pages = kmem_zalloc(sizeof(struct page *) *
-					bp->b_page_count, KM_NOFS);
-		if (!bp->b_pages)
-			return -ENOMEM;
-	} else {
-		bp->b_pages = bp->b_page_array;
-	}
+	unsigned int	i;
 
-	return 0;
-}
+	for (i = 0; i < bp->b_page_count; i++)
+		__free_page(bp->b_pages[i]);
 
-/*
- * Frees b_pages if it was allocated.
- */
-STATIC void
-_xfs_buf_free_pages(
-	struct xfs_buf	*bp)
-{
 	if (bp->b_pages != bp->b_page_array) {
 		kmem_free(bp->b_pages);
 		bp->b_pages = NULL;
@@ -324,22 +306,14 @@ xfs_buf_free(
 	ASSERT(list_empty(&bp->b_lru));
 
 	if (bp->b_flags & _XBF_PAGES) {
-		uint		i;
-
 		if (xfs_buf_is_vmapped(bp))
 			vm_unmap_ram(bp->b_addr, bp->b_page_count);
-
-		for (i = 0; i < bp->b_page_count; i++) {
-			struct page	*page = bp->b_pages[i];
-
-			__free_page(page);
-		}
+		xfs_buf_free_pages(bp);
 		if (current->reclaim_state)
 			current->reclaim_state->reclaimed_slab +=
 						bp->b_page_count;
 	} else if (bp->b_flags & _XBF_KMEM)
 		kmem_free(bp->b_addr);
-	_xfs_buf_free_pages(bp);
 	xfs_buf_free_maps(bp);
 	kmem_cache_free(xfs_buf_zone, bp);
 }
@@ -380,34 +354,33 @@ xfs_buf_alloc_slab(
 static int
 xfs_buf_alloc_pages(
 	struct xfs_buf	*bp,
-	uint		flags)
+	gfp_t		gfp_mask,
+	bool		fail_fast)
 {
-	gfp_t		gfp_mask = xb_to_gfp(flags);
-	unsigned short	i;
-	int		error;
-
-	/*
-	 * assure zeroed buffer for non-read cases.
-	 */
-	if (!(flags & XBF_READ))
-		gfp_mask |= __GFP_ZERO;
+	int		i;
 
-	error = _xfs_buf_get_pages(bp);
-	if (unlikely(error))
-		return error;
+	ASSERT(bp->b_pages == NULL);
 
-	bp->b_flags |= _XBF_PAGES;
+	bp->b_page_count = DIV_ROUND_UP(BBTOB(bp->b_length), PAGE_SIZE);
+	if (bp->b_page_count > XB_PAGES) {
+		bp->b_pages = kmem_zalloc(sizeof(struct page *) *
+					bp->b_page_count, KM_NOFS);
+		if (!bp->b_pages)
+			return -ENOMEM;
+	} else {
+		bp->b_pages = bp->b_page_array;
+	}
 
 	for (i = 0; i < bp->b_page_count; i++) {
 		struct page	*page;
 		uint		retries = 0;
retry:
 		page = alloc_page(gfp_mask);
-		if (unlikely(page == NULL)) {
-			if (flags & XBF_READ_AHEAD) {
+		if (unlikely(!page)) {
+			if (fail_fast) {
 				bp->b_page_count = i;
-				error = -ENOMEM;
-				goto out_free_pages;
+				xfs_buf_free_pages(bp);
+				return -ENOMEM;
 			}
 
 			/*
@@ -429,13 +402,9 @@ xfs_buf_alloc_pages(
 		bp->b_pages[i] = page;
 	}
 
-	return 0;
-out_free_pages:
-	for (i = 0; i < bp->b_page_count; i++)
-		__free_page(bp->b_pages[i]);
-	bp->b_flags &= ~_XBF_PAGES;
-	return error;
+	bp->b_flags |= _XBF_PAGES;
+	return 0;
 }
 
 /*
@@ -706,7 +675,13 @@ xfs_buf_get_map(
 	 */
 	if (BBTOB(new_bp->b_length) >= PAGE_SIZE ||
 	    xfs_buf_alloc_slab(new_bp, flags) < 0) {
-		error = xfs_buf_alloc_pages(new_bp, flags);
+		gfp_t	gfp_mask = xb_to_gfp(flags);
+
+		/* assure a zeroed buffer for non-read cases */
+		if (!(flags & XBF_READ))
+			gfp_mask |= __GFP_ZERO;
+		error = xfs_buf_alloc_pages(new_bp, gfp_mask,
+				flags & XBF_READ_AHEAD);
 		if (error)
 			goto out_free_buf;
 	}
@@ -936,7 +911,7 @@ xfs_buf_get_uncached(
 	int		flags,
 	struct xfs_buf	**bpp)
 {
-	int		error, i;
+	int		error;
 	struct xfs_buf	*bp;
 	DEFINE_SINGLE_BUF_MAP(map, XFS_BUF_DADDR_NULL, numblks);
 
@@ -947,19 +922,10 @@ xfs_buf_get_uncached(
 	if (error)
 		goto fail;
 
-	error = _xfs_buf_get_pages(bp);
+	error = xfs_buf_alloc_pages(bp, xb_to_gfp(flags), true);
 	if (error)
 		goto fail_free_buf;
 
-	for (i = 0; i < bp->b_page_count; i++) {
-		bp->b_pages[i] = alloc_page(xb_to_gfp(flags));
-		if (!bp->b_pages[i]) {
-			error = -ENOMEM;
-			goto fail_free_mem;
-		}
-	}
-	bp->b_flags |= _XBF_PAGES;
-
 	error = _xfs_buf_map_pages(bp, 0);
 	if (unlikely(error)) {
 		xfs_warn(target->bt_mount,
@@ -972,9 +938,7 @@ xfs_buf_get_uncached(
 	return 0;
 
 fail_free_mem:
-	while (--i >= 0)
-		__free_page(bp->b_pages[i]);
-	_xfs_buf_free_pages(bp);
+	xfs_buf_free_pages(bp);
 fail_free_buf:
 	xfs_buf_free_maps(bp);
 	kmem_cache_free(xfs_buf_zone, bp);

From patchwork Wed May 19 19:08:58 2021
X-Patchwork-Id: 12268243
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Cc: Dave Chinner
Subject: [PATCH 09/11] xfs: lift the buffer zeroing logic into xfs_buf_alloc_pages
Date: Wed, 19 May 2021 21:08:58 +0200
Message-Id: <20210519190900.320044-10-hch@lst.de>

Lift the buffer zeroing logic from xfs_buf_get_map into
xfs_buf_alloc_pages so that it also covers uncached buffers, and remove
the now obsolete manual zeroing in the only direct caller of
xfs_buf_get_uncached.
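The flag-to-gfp translation this patch consolidates can be sketched in plain userspace C. This is only a model: the flag and gfp bit values below are illustrative stand-ins, not the kernel's definitions, and buf_gfp_mask is a hypothetical helper name.

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel's buffer flags and gfp bits;
 * the values are arbitrary, chosen only so the bits are distinct. */
enum { XBF_READ = 1 << 0, XBF_READ_AHEAD = 1 << 1 };
enum { GFP_NOFS = 1 << 0, __GFP_NOWARN = 1 << 1,
       __GFP_NORETRY = 1 << 2, __GFP_ZERO = 1 << 3 };

/* Models the mask selection now done inside xfs_buf_alloc_pages:
 * readahead allocations may fail fast (NORETRY), everything else
 * retries under NOFS, and any buffer that is not about to be
 * overwritten by a read gets zeroed pages. */
unsigned int buf_gfp_mask(unsigned int flags)
{
	unsigned int gfp_mask = __GFP_NOWARN;

	if (flags & XBF_READ_AHEAD)
		gfp_mask |= __GFP_NORETRY;
	else
		gfp_mask |= GFP_NOFS;

	/* assure a zeroed buffer for non-read cases */
	if (!(flags & XBF_READ))
		gfp_mask |= __GFP_ZERO;
	return gfp_mask;
}
```

With the zeroing folded into the helper, uncached buffers (which pass no XBF_READ) automatically get __GFP_ZERO, which is what lets the manual xfs_buf_zero call in xfs_get_aghdr_buf go away.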
Signed-off-by: Christoph Hellwig
---
 fs/xfs/libxfs/xfs_ag.c |  1 -
 fs/xfs/xfs_buf.c       | 24 +++++++++++++-----------
 2 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_ag.c b/fs/xfs/libxfs/xfs_ag.c
index c68a3668847499..be0087825ae06b 100644
--- a/fs/xfs/libxfs/xfs_ag.c
+++ b/fs/xfs/libxfs/xfs_ag.c
@@ -43,7 +43,6 @@ xfs_get_aghdr_buf(
 	if (error)
 		return error;
 
-	xfs_buf_zero(bp, 0, BBTOB(bp->b_length));
 	bp->b_bn = blkno;
 	bp->b_maps[0].bm_bn = blkno;
 	bp->b_ops = ops;
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 31aff8323605cd..b3519a43759235 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -22,9 +22,6 @@
 
 static kmem_zone_t *xfs_buf_zone;
 
-#define xb_to_gfp(flags) \
-	((((flags) & XBF_READ_AHEAD) ? __GFP_NORETRY : GFP_NOFS) | __GFP_NOWARN)
-
 /*
  * Locking orders
  *
@@ -354,11 +351,21 @@ xfs_buf_alloc_slab(
 static int
 xfs_buf_alloc_pages(
 	struct xfs_buf	*bp,
-	gfp_t		gfp_mask,
+	xfs_buf_flags_t	flags,
 	bool		fail_fast)
 {
+	gfp_t		gfp_mask = __GFP_NOWARN;
 	int		i;
 
+	if (flags & XBF_READ_AHEAD)
+		gfp_mask |= __GFP_NORETRY;
+	else
+		gfp_mask |= GFP_NOFS;
+
+	/* assure a zeroed buffer for non-read cases */
+	if (!(flags & XBF_READ))
+		gfp_mask |= __GFP_ZERO;
+
 	ASSERT(bp->b_pages == NULL);
 
 	bp->b_page_count = DIV_ROUND_UP(BBTOB(bp->b_length), PAGE_SIZE);
@@ -675,12 +682,7 @@ xfs_buf_get_map(
 	 */
 	if (BBTOB(new_bp->b_length) >= PAGE_SIZE ||
 	    xfs_buf_alloc_slab(new_bp, flags) < 0) {
-		gfp_t	gfp_mask = xb_to_gfp(flags);
-
-		/* assure a zeroed buffer for non-read cases */
-		if (!(flags & XBF_READ))
-			gfp_mask |= __GFP_ZERO;
-		error = xfs_buf_alloc_pages(new_bp, gfp_mask,
+		error = xfs_buf_alloc_pages(new_bp, flags,
 				flags & XBF_READ_AHEAD);
 		if (error)
 			goto out_free_buf;
@@ -922,7 +924,7 @@ xfs_buf_get_uncached(
 	if (error)
 		goto fail;
 
-	error = xfs_buf_alloc_pages(bp, xb_to_gfp(flags), true);
+	error = xfs_buf_alloc_pages(bp, flags, true);
 	if (error)
 		goto fail_free_buf;
 

From patchwork Wed May 19 19:08:59 2021
X-Patchwork-Id: 12268245
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Cc: Dave Chinner
Subject: [PATCH 10/11] xfs: retry allocations from xfs_buf_get_uncached as well
Date: Wed, 19 May 2021 21:08:59 +0200
Message-Id: <20210519190900.320044-11-hch@lst.de>

There is no good reason why xfs_buf_get_uncached should fail on the
first allocation failure, so make it behave the same as the normal
xfs_buf_get_map path.
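The fail-fast-versus-retry policy being unified here can be modeled with a toy allocator. All names below are hypothetical stand-ins; the real code sleeps in congestion_wait() between retries rather than spinning:

```c
#include <assert.h>

/* Fake allocator: fails the first `fake_failures` calls, then succeeds,
 * modeling transient memory pressure. */
int fake_failures;

int fake_alloc_page(void)
{
	if (fake_failures > 0) {
		fake_failures--;
		return 0;	/* allocation failed */
	}
	return 1;		/* allocation succeeded */
}

/* Models the policy after this patch: only readahead gives up on the
 * first failure; every other caller (now including uncached buffers)
 * keeps retrying.  Returns 0 on success, -1 modeling -ENOMEM. */
int alloc_one_page(int readahead)
{
	for (;;) {
		if (fake_alloc_page())
			return 0;
		if (readahead)
			return -1;	/* fail fast, no retry */
		/* non-readahead callers would congestion_wait() here
		 * and try again */
	}
}
```

The point of the patch is that the second branch no longer depends on which interface allocated the buffer, only on whether the I/O is speculative readahead.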
Signed-off-by: Christoph Hellwig
---
 fs/xfs/xfs_buf.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index b3519a43759235..a1295b5b6f0ca6 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -351,8 +351,7 @@ xfs_buf_alloc_slab(
 static int
 xfs_buf_alloc_pages(
 	struct xfs_buf	*bp,
-	xfs_buf_flags_t	flags,
-	bool		fail_fast)
+	xfs_buf_flags_t	flags)
 {
 	gfp_t		gfp_mask = __GFP_NOWARN;
 	int		i;
@@ -384,7 +383,7 @@ xfs_buf_alloc_pages(
retry:
 		page = alloc_page(gfp_mask);
 		if (unlikely(!page)) {
-			if (fail_fast) {
+			if (flags & XBF_READ_AHEAD) {
 				bp->b_page_count = i;
 				xfs_buf_free_pages(bp);
 				return -ENOMEM;
@@ -682,8 +681,7 @@ xfs_buf_get_map(
 	 */
 	if (BBTOB(new_bp->b_length) >= PAGE_SIZE ||
 	    xfs_buf_alloc_slab(new_bp, flags) < 0) {
-		error = xfs_buf_alloc_pages(new_bp, flags,
-				flags & XBF_READ_AHEAD);
+		error = xfs_buf_alloc_pages(new_bp, flags);
 		if (error)
 			goto out_free_buf;
 	}
@@ -924,7 +922,7 @@ xfs_buf_get_uncached(
 	if (error)
 		goto fail;
 
-	error = xfs_buf_alloc_pages(bp, flags, true);
+	error = xfs_buf_alloc_pages(bp, flags);
 	if (error)
 		goto fail_free_buf;
 

From patchwork Wed May 19 19:09:00 2021
X-Patchwork-Id: 12268247
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Cc: Dave Chinner
Subject: [PATCH 11/11] xfs: use alloc_pages_bulk_array() for buffers
Date: Wed, 19 May 2021 21:09:00 +0200
Message-Id: <20210519190900.320044-12-hch@lst.de>
From: Dave Chinner

Because it's more efficient than allocating pages one at a time in a
loop.

Signed-off-by: Dave Chinner
[hch: rebased ontop of a bunch of cleanups]
Signed-off-by: Christoph Hellwig
---
 fs/xfs/xfs_buf.c | 39 +++++++++++++++------------------------
 1 file changed, 15 insertions(+), 24 deletions(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index a1295b5b6f0ca6..e2439503fc13bb 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -354,7 +354,7 @@ xfs_buf_alloc_pages(
 	xfs_buf_flags_t	flags)
 {
 	gfp_t		gfp_mask = __GFP_NOWARN;
-	int		i;
+	unsigned long	filled = 0;
 
 	if (flags & XBF_READ_AHEAD)
 		gfp_mask |= __GFP_NORETRY;
@@ -377,36 +377,27 @@ xfs_buf_alloc_pages(
 		bp->b_pages = bp->b_page_array;
 	}
 
-	for (i = 0; i < bp->b_page_count; i++) {
-		struct page	*page;
-		uint		retries = 0;
-retry:
-		page = alloc_page(gfp_mask);
-		if (unlikely(!page)) {
+	/*
+	 * Bulk filling of pages can take multiple calls. Not filling the entire
+	 * array is not an allocation failure, so don't back off if we get at
+	 * least one extra page.
+	 */
+	for (;;) {
+		unsigned long	last = filled;
+
+		filled = alloc_pages_bulk_array(gfp_mask, bp->b_page_count,
+						bp->b_pages);
+		if (filled == bp->b_page_count)
+			break;
+		if (filled == last) {
 			if (flags & XBF_READ_AHEAD) {
-				bp->b_page_count = i;
+				bp->b_page_count = filled;
 				xfs_buf_free_pages(bp);
 				return -ENOMEM;
 			}
-
-			/*
-			 * This could deadlock.
-			 *
-			 * But until all the XFS lowlevel code is revamped to
-			 * handle buffer allocation failures we can't do much.
-			 */
-			if (!(++retries % 100))
-				xfs_err(NULL,
-	"%s(%u) possible memory allocation deadlock in %s (mode:0x%x)",
-					current->comm, current->pid,
-					__func__, gfp_mask);
-
 			XFS_STATS_INC(bp->b_mount, xb_page_retries);
 			congestion_wait(BLK_RW_ASYNC, HZ/50);
-			goto retry;
 		}
-
-		bp->b_pages[i] = page;
 	}
 
 	bp->b_flags |= _XBF_PAGES;
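The partial-fill contract the new loop relies on can be modeled in userspace. Here fake_bulk_alloc is a hypothetical stand-in for alloc_pages_bulk_array(), which may populate only part of the array per call and returns the total number of populated slots; a pass that adds no pages at all is the real failure condition:

```c
#include <assert.h>

/* Toy bulk allocator: fills at most `chunk` empty slots per call and
 * returns the total number of populated slots, mimicking the
 * partial-array contract of alloc_pages_bulk_array(). */
int chunk = 2;

int fake_bulk_alloc(int want, int *pages)
{
	int filled = 0, budget = chunk;

	for (int i = 0; i < want; i++) {
		if (pages[i]) {
			filled++;	/* already populated, skipped */
		} else if (budget > 0) {
			pages[i] = 1;	/* "allocate" this slot */
			budget--;
			filled++;
		}
	}
	return filled;
}

/* The loop shape from the patch: keep calling as long as each pass
 * makes progress; a pass that adds nothing models -ENOMEM (here -1). */
int fill_all(int want, int *pages)
{
	int filled = 0;

	for (;;) {
		int last = filled;

		filled = fake_bulk_alloc(want, pages);
		if (filled == want)
			return 0;
		if (filled == last)
			return -1;	/* no forward progress */
		/* progress was made: immediately retry, no backoff */
	}
}
```

This is why the comment in the patch stresses that a short fill is not an error: the loop only treats two identical consecutive fill counts as memory exhaustion.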