From patchwork Wed May 4 18:28:51 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12838370
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH v2 20/26] mm/shmem: Add shmem_alloc_folio()
Date: Wed, 4 May 2022 19:28:51 +0100
Message-Id: <20220504182857.4013401-21-willy@infradead.org>
In-Reply-To: <20220504182857.4013401-1-willy@infradead.org>
References: <20220504182857.4013401-1-willy@infradead.org>

Call vma_alloc_folio() directly instead of alloc_page_vma().  Add a
shmem_alloc_page() wrapper to avoid changing the callers.
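For readers outside mm/, a minimal userspace sketch of the pattern the
changelog describes: the new allocator returns the containing structure (a
stand-in for struct folio below), and the old function name is kept as a
thin wrapper so existing callers compile unchanged.  All names and types
here are illustrative, not kernel API.

	#include <stdio.h>
	#include <stdlib.h>

	/* Stand-ins for struct page / struct folio; illustrative only. */
	struct page { int id; };
	struct folio { struct page page; };	/* page is the first member */

	/* New-style allocator: returns the folio, the richer type. */
	static struct folio *example_alloc_folio(void)
	{
		return calloc(1, sizeof(struct folio));
	}

	/* Compatibility wrapper: existing callers keep getting a struct page. */
	static struct page *example_alloc_page(void)
	{
		struct folio *folio = example_alloc_folio();

		/* page sits at offset 0, so a NULL folio maps to a NULL page. */
		return folio ? &folio->page : NULL;
	}

	int main(void)
	{
		struct page *page = example_alloc_page();	/* caller unchanged */

		if (!page)
			return 1;
		page->id = 42;
		printf("page id %d\n", page->id);
		free(page);	/* same address as the folio, so this is valid */
		return 0;
	}

The wrapper costs nothing at runtime and lets callers be converted to
folios one at a time.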
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig
---
 mm/shmem.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 7946ccbc60bf..36a4d7f07e0b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1543,17 +1543,23 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
 	return &folio->page;
 }
 
-static struct page *shmem_alloc_page(gfp_t gfp,
+static struct folio *shmem_alloc_folio(gfp_t gfp,
 		struct shmem_inode_info *info, pgoff_t index)
 {
 	struct vm_area_struct pvma;
-	struct page *page;
+	struct folio *folio;
 
 	shmem_pseudo_vma_init(&pvma, info, index);
-	page = alloc_page_vma(gfp, &pvma, 0);
+	folio = vma_alloc_folio(gfp, 0, &pvma, 0, false);
 	shmem_pseudo_vma_destroy(&pvma);
 
-	return page;
+	return folio;
+}
+
+static struct page *shmem_alloc_page(gfp_t gfp,
+		struct shmem_inode_info *info, pgoff_t index)
+{
+	return &shmem_alloc_folio(gfp, info, index)->page;
 }
 
 static struct page *shmem_alloc_and_acct_page(gfp_t gfp,
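For reference, my reading of the replacement call is annotated below; it is
an order-0 (single base page) allocation against the shmem pseudo-VMA, using
the vma_alloc_folio() interface introduced earlier in this series.  The
per-argument comments are mine, not part of the patch.

	folio = vma_alloc_folio(gfp,	/* gfp flags from the caller */
				0,	/* order: a single base page */
				&pvma,	/* pseudo-VMA carrying the mempolicy */
				0,	/* address hint within the VMA */
				false);	/* not the THP/hugepage path */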