From patchwork Wed Jul 31 15:45:52 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Steven Price
X-Patchwork-Id: 11068517
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v10 11/22] mm: pagewalk: Add p4d_entry() and pgd_entry()
Date: Wed, 31 Jul 2019 16:45:52 +0100
Message-Id: <20190731154603.41797-12-steven.price@arm.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190731154603.41797-1-steven.price@arm.com>
References: <20190731154603.41797-1-steven.price@arm.com>
Cc: Mark Rutland, x86@kernel.org, Arnd Bergmann, Ard Biesheuvel,
    Peter Zijlstra, Catalin Marinas, Dave Hansen, linux-kernel@vger.kernel.org,
    Steven Price, Jérôme Glisse, Ingo Molnar, Borislav Petkov,
    Andy Lutomirski, "H. Peter Anvin", James Morse, Thomas Gleixner,
    Will Deacon, Andrew Morton, linux-arm-kernel@lists.infradead.org,
    "Liang, Kan"

pgd_entry() and pud_entry() were removed by commit 0b1fbfe50006c410
("mm/pagewalk: remove pgd_entry() and pud_entry()") because there were
no users. We're about to add users, so reintroduce them, along with
p4d_entry() as we now have 5 levels of tables.

Note that commit a00cc7d9dd93d66a ("mm, x86: add support for PUD-sized
transparent hugepages") already re-added pud_entry() but with different
semantics to the other callbacks. Since there have never been upstream
users of this, revert the semantics back to match the other callbacks.
This means pud_entry() is called for all entries, not just transparent
huge pages.

Signed-off-by: Steven Price
---
 include/linux/mm.h | 18 ++++++++++++------
 mm/pagewalk.c      | 27 ++++++++++++++++-----------
 2 files changed, 28 insertions(+), 17 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0334ca97c584..0a1e83c4e267 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1432,15 +1432,14 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
 /**
  * mm_walk - callbacks for walk_page_range
- * @pud_entry: if set, called for each non-empty PUD (2nd-level) entry
- *	       this handler should only handle pud_trans_huge() puds.
- *	       the pmd_entry or pte_entry callbacks will be used for
- *	       regular PUDs.
- * @pmd_entry: if set, called for each non-empty PMD (3rd-level) entry
+ * @pgd_entry: if set, called for each non-empty PGD (top-level) entry
+ * @p4d_entry: if set, called for each non-empty P4D entry
+ * @pud_entry: if set, called for each non-empty PUD entry
+ * @pmd_entry: if set, called for each non-empty PMD entry
  *	       this handler is required to be able to handle
  *	       pmd_trans_huge() pmds. They may simply choose to
  *	       split_huge_page() instead of handling it explicitly.
- * @pte_entry: if set, called for each non-empty PTE (4th-level) entry
+ * @pte_entry: if set, called for each non-empty PTE (lowest-level) entry
  * @pte_hole: if set, called for each hole at all levels
  * @hugetlb_entry: if set, called for each hugetlb entry
  * @test_walk: caller specific callback function to determine whether
@@ -1453,8 +1452,15 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
  * @private: private data for callbacks' usage
  *
  * (see the comment on walk_page_range() for more details)
+ *
+ * p?d_entry callbacks are called even if those levels are folded on a
+ * particular architecture/configuration.
  */
 struct mm_walk {
+	int (*pgd_entry)(pgd_t *pgd, unsigned long addr,
+			 unsigned long next, struct mm_walk *walk);
+	int (*p4d_entry)(p4d_t *p4d, unsigned long addr,
+			 unsigned long next, struct mm_walk *walk);
 	int (*pud_entry)(pud_t *pud, unsigned long addr,
 			 unsigned long next, struct mm_walk *walk);
 	int (*pmd_entry)(pmd_t *pmd, unsigned long addr,
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index c3084ff2569d..98373a9f88b8 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -90,15 +90,9 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 		}
 
 		if (walk->pud_entry) {
-			spinlock_t *ptl = pud_trans_huge_lock(pud, walk->vma);
-
-			if (ptl) {
-				err = walk->pud_entry(pud, addr, next, walk);
-				spin_unlock(ptl);
-				if (err)
-					break;
-				continue;
-			}
+			err = walk->pud_entry(pud, addr, next, walk);
+			if (err)
+				break;
 		}
 
 		split_huge_pud(walk->vma, pud, addr);
@@ -131,7 +125,12 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
 				break;
 			continue;
 		}
-		if (walk->pmd_entry || walk->pte_entry)
+		if (walk->p4d_entry) {
+			err = walk->p4d_entry(p4d, addr, next, walk);
+			if (err)
+				break;
+		}
+		if (walk->pud_entry || walk->pmd_entry || walk->pte_entry)
 			err = walk_pud_range(p4d, addr, next, walk);
 		if (err)
 			break;
@@ -157,7 +156,13 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
 				break;
 			continue;
 		}
-		if (walk->pmd_entry || walk->pte_entry)
+		if (walk->pgd_entry) {
+			err = walk->pgd_entry(pgd, addr, next, walk);
+			if (err)
+				break;
+		}
+		if (walk->p4d_entry || walk->pud_entry || walk->pmd_entry ||
+		    walk->pte_entry)
 			err = walk_p4d_range(pgd, addr, next, walk);
 		if (err)
 			break;
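
For illustration only, a caller of the reintroduced callbacks might look like
the sketch below. This is not part of the patch: the names (level_counts,
count_*_entry, count_levels) and the counting logic are invented for the
example, and it assumes the walk_page_range() interface as it stands in this
series, where the callbacks live directly in struct mm_walk and the caller
holds mmap_sem for read across the walk.

#include <linux/mm.h>
#include <linux/rwsem.h>

/* Hypothetical private data shared between the example callbacks. */
struct level_counts {
	unsigned long pgd, p4d, pud;
};

static int count_pgd_entry(pgd_t *pgd, unsigned long addr,
			   unsigned long next, struct mm_walk *walk)
{
	struct level_counts *c = walk->private;

	c->pgd++;		/* one non-empty top-level entry seen */
	return 0;		/* 0 means keep walking */
}

static int count_p4d_entry(p4d_t *p4d, unsigned long addr,
			   unsigned long next, struct mm_walk *walk)
{
	struct level_counts *c = walk->private;

	c->p4d++;
	return 0;
}

static int count_pud_entry(pud_t *pud, unsigned long addr,
			   unsigned long next, struct mm_walk *walk)
{
	struct level_counts *c = walk->private;

	/* With this patch, called for every non-empty PUD, not only
	 * pud_trans_huge() entries. */
	c->pud++;
	return 0;
}

/* Hypothetical helper: count non-empty entries per level in [start, end). */
static int count_levels(struct mm_struct *mm, unsigned long start,
			unsigned long end, struct level_counts *counts)
{
	struct mm_walk walk = {
		.pgd_entry = count_pgd_entry,
		.p4d_entry = count_p4d_entry,
		.pud_entry = count_pud_entry,
		.mm = mm,
		.private = counts,
	};
	int ret;

	down_read(&mm->mmap_sem);
	ret = walk_page_range(start, end, &walk);
	up_read(&mm->mmap_sem);

	return ret;
}

Because the p?d_entry callbacks are invoked even when a level is folded on a
given architecture/configuration, a walker like this sees the same callback
sequence whether or not, say, P4D is a real level on the running kernel.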