From patchwork Wed Feb 5 15:09:51 2025
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13961289
From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas, Will Deacon, Muchun Song, Pasha Tatashin,
    Andrew Morton, Uladzislau Rezki, Christoph Hellwig, Mark Rutland,
    Ard Biesheuvel, Anshuman Khandual, Dev Jain, Alexandre Ghiti,
    Steve Capper, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v1 11/16] mm/vmalloc: Gracefully unmap huge ptes
Date: Wed, 5 Feb 2025 15:09:51 +0000
Message-ID: <20250205151003.88959-12-ryan.roberts@arm.com>
In-Reply-To: <20250205151003.88959-1-ryan.roberts@arm.com>
References: <20250205151003.88959-1-ryan.roberts@arm.com>
MIME-Version: 1.0
Commit f7ee1f13d606 ("mm/vmalloc: enable mapping of huge pages at pte
level in vmap") added its support by reusing the set_huge_pte_at() API,
which is otherwise only used for user mappings. But when unmapping those
huge ptes, it continued to call ptep_get_and_clear(), which is a
layering violation. To date, the only arch to implement this support is
powerpc and it all happens to work ok for it. But arm64's implementation
of ptep_get_and_clear() cannot be safely used to clear a previous
set_huge_pte_at().
So let's introduce a new arch opt-in function,
arch_vmap_pte_range_unmap_size(), which can provide the size of a
(present) pte. Then we can call huge_ptep_get_and_clear() to tear it
down properly.

Note that if vunmap_range() is called with a range that starts in the
middle of a huge pte-mapped page, we must unmap the entire huge page so
the behaviour is consistent with pmd and pud block mappings. In this
case emit a warning just like we do for pmd/pud mappings.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 include/linux/vmalloc.h |  8 ++++++++
 mm/vmalloc.c            | 18 ++++++++++++++++--
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 31e9ffd936e3..16dd4cba64f2 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -113,6 +113,14 @@ static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr, uns
 }
 #endif
 
+#ifndef arch_vmap_pte_range_unmap_size
+static inline unsigned long arch_vmap_pte_range_unmap_size(unsigned long addr,
+							   pte_t *ptep)
+{
+	return PAGE_SIZE;
+}
+#endif
+
 #ifndef arch_vmap_pte_supported_shift
 static inline int arch_vmap_pte_supported_shift(unsigned long size)
 {
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index fcdf67d5177a..6111ce900ec4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -350,12 +350,26 @@ static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 			     pgtbl_mod_mask *mask)
 {
 	pte_t *pte;
+	pte_t ptent;
+	unsigned long size = PAGE_SIZE;
 
 	pte = pte_offset_kernel(pmd, addr);
 	do {
-		pte_t ptent = ptep_get_and_clear(&init_mm, addr, pte);
+#ifdef CONFIG_HUGETLB_PAGE
+		size = arch_vmap_pte_range_unmap_size(addr, pte);
+		if (size != PAGE_SIZE) {
+			if (WARN_ON(!IS_ALIGNED(addr, size))) {
+				addr = ALIGN_DOWN(addr, size);
+				pte = PTR_ALIGN_DOWN(pte, sizeof(*pte) * (size >> PAGE_SHIFT));
+			}
+			ptent = huge_ptep_get_and_clear(&init_mm, addr, pte, size);
+			if (WARN_ON(end - addr < size))
+				size = end - addr;
+		} else
+#endif
+			ptent = ptep_get_and_clear(&init_mm, addr, pte);
 		WARN_ON(!pte_none(ptent) && !pte_present(ptent));
-	} while (pte++, addr += PAGE_SIZE, addr != end);
+	} while (pte += (size >> PAGE_SHIFT), addr += size, addr != end);
 
 	*mask |= PGTBL_PTE_MODIFIED;
 }