From patchwork Wed May  4 18:28:32 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12838354
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH v2 01/26] shmem: Convert shmem_alloc_hugepage() to use vma_alloc_folio()
Date: Wed,  4 May 2022 19:28:32 +0100
Message-Id: <20220504182857.4013401-2-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220504182857.4013401-1-willy@infradead.org>
References: <20220504182857.4013401-1-willy@infradead.org>

For now, return the head page of the folio, but remove use of the old
alloc_pages_vma() API.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 mm/shmem.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 4b2fea33158e..c89394221a7e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1527,7 +1527,7 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
 	struct vm_area_struct pvma;
 	struct address_space *mapping = info->vfs_inode.i_mapping;
 	pgoff_t hindex;
-	struct page *page;
+	struct folio *folio;
 
 	hindex = round_down(index, HPAGE_PMD_NR);
 	if (xa_find(&mapping->i_pages, &hindex, hindex + HPAGE_PMD_NR - 1,
@@ -1535,13 +1535,11 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
 		return NULL;
 
 	shmem_pseudo_vma_init(&pvma, info, hindex);
-	page = alloc_pages_vma(gfp, HPAGE_PMD_ORDER, &pvma, 0, true);
+	folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, &pvma, 0, true);
 	shmem_pseudo_vma_destroy(&pvma);
-	if (page)
-		prep_transhuge_page(page);
-	else
+	if (!folio)
 		count_vm_event(THP_FILE_FALLBACK);
-	return page;
+	return &folio->page;
 }
 
 static struct page *shmem_alloc_page(gfp_t gfp,
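
As an aside for reviewers (not part of the patch): because
shmem_alloc_hugepage() still returns the head page, existing callers keep
working until later patches in this series convert them to folios.  A
hypothetical caller that wants to work in folio terms could convert the
return value back with page_folio(), roughly as in the sketch below; the
wrapper name is made up for illustration.

/*
 * Illustration only -- not part of this patch.  shmem_alloc_hugepage()
 * now allocates a folio internally but still returns its head page; on
 * failure the patch returns &folio->page of a NULL folio, which is still
 * NULL because struct page sits at offset zero in struct folio.
 */
static struct folio *shmem_alloc_hugefolio_sketch(gfp_t gfp,
		struct shmem_inode_info *info, pgoff_t index)
{
	struct page *page = shmem_alloc_hugepage(gfp, info, index);

	return page ? page_folio(page) : NULL;
}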