From patchwork Thu Aug 16 17:01:21 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 10567885
Subject: Patch "x86/mm: Add TLB purge to free pmd/pte page interfaces" has been added to the 4.14-stable tree
To: 20180627141348.21777-4-toshi.kani@hpe.com, akpm@linux-foundation.org, cpandya@codeaurora.org, gregkh@linuxfoundation.org, hpa@zytor.com, joro@8bytes.org, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org, mhocko@suse.com, tglx@linutronix.de, toshi.kani@hpe.com
Date: Thu, 16 Aug 2018 19:01:21 +0200
Message-ID: <1534438881213152@kroah.com>
X-stable: commit
Sender: owner-linux-mm@kvack.org

This
is a note to let you know that I've just added the patch titled

    x86/mm: Add TLB purge to free pmd/pte page interfaces

to the 4.14-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
    x86-mm-add-tlb-purge-to-free-pmd-pte-page-interfaces.patch
and it can be found in the queue-4.14 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let me know about it.


From 5e0fb5df2ee871b841f96f9cb6a7f2784e96aa4e Mon Sep 17 00:00:00 2001
From: Toshi Kani
Date: Wed, 27 Jun 2018 08:13:48 -0600
Subject: x86/mm: Add TLB purge to free pmd/pte page interfaces

From: Toshi Kani

commit 5e0fb5df2ee871b841f96f9cb6a7f2784e96aa4e upstream.

ioremap() calls pud_free_pmd_page() / pmd_free_pte_page() when it creates
a pud / pmd map.  The following preconditions are met at their entry.
 - All pte entries for a target pud/pmd address range have been cleared.
 - System-wide TLB purges have been performed for a target pud/pmd address
   range.

The preconditions assure that there is no stale TLB entry for the range.
Speculation may not cache TLB entries since it requires all levels of page
entries, including ptes, to have P & A-bits set for an associated address.
However, speculation may cache pud/pmd entries (paging-structure caches)
when they have P-bit set.

Add a system-wide TLB purge (INVLPG) to a single page after clearing
pud/pmd entry's P-bit.

SDM 4.10.4.1, Operations that Invalidate TLBs and Paging-Structure Caches,
states that:
  INVLPG invalidates all paging-structure caches associated with the
  current PCID regardless of the linear addresses to which they correspond.
Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
Signed-off-by: Toshi Kani
Signed-off-by: Thomas Gleixner
Cc: mhocko@suse.com
Cc: akpm@linux-foundation.org
Cc: hpa@zytor.com
Cc: cpandya@codeaurora.org
Cc: linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: Joerg Roedel
Cc: stable@vger.kernel.org
Cc: Andrew Morton
Cc: Michal Hocko
Cc: "H. Peter Anvin"
Cc:
Link: https://lkml.kernel.org/r/20180627141348.21777-4-toshi.kani@hpe.com
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/mm/pgtable.c |   38 +++++++++++++++++++++++++++++++-------
 1 file changed, 31 insertions(+), 7 deletions(-)

Patches currently in stable-queue which might be from toshi.kani@hpe.com are

queue-4.14/x86-mm-disable-ioremap-free-page-handling-on-x86-pae.patch
queue-4.14/ioremap-update-pgtable-free-interfaces-with-addr.patch
queue-4.14/x86-mm-add-tlb-purge-to-free-pmd-pte-page-interfaces.patch

--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -718,24 +718,44 @@ int pmd_clear_huge(pmd_t *pmd)
  * @pud: Pointer to a PUD.
  * @addr: Virtual address associated with pud.
  *
- * Context: The pud range has been unmaped and TLB purged.
+ * Context: The pud range has been unmapped and TLB purged.
  * Return: 1 if clearing the entry succeeded. 0 otherwise.
+ *
+ * NOTE: Callers must allow a single page allocation.
  */
 int pud_free_pmd_page(pud_t *pud, unsigned long addr)
 {
-	pmd_t *pmd;
+	pmd_t *pmd, *pmd_sv;
+	pte_t *pte;
 	int i;
 
 	if (pud_none(*pud))
 		return 1;
 
 	pmd = (pmd_t *)pud_page_vaddr(*pud);
-
-	for (i = 0; i < PTRS_PER_PMD; i++)
-		if (!pmd_free_pte_page(&pmd[i], addr + (i * PMD_SIZE)))
-			return 0;
+	pmd_sv = (pmd_t *)__get_free_page(GFP_KERNEL);
+	if (!pmd_sv)
+		return 0;
+
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		pmd_sv[i] = pmd[i];
+		if (!pmd_none(pmd[i]))
+			pmd_clear(&pmd[i]);
+	}
 
 	pud_clear(pud);
+
+	/* INVLPG to clear all paging-structure caches */
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		if (!pmd_none(pmd_sv[i])) {
+			pte = (pte_t *)pmd_page_vaddr(pmd_sv[i]);
+			free_page((unsigned long)pte);
+		}
+	}
+
+	free_page((unsigned long)pmd_sv);
 	free_page((unsigned long)pmd);
 
 	return 1;
@@ -746,7 +766,7 @@ int pud_free_pmd_page(pud_t *pud, unsign
  * @pmd: Pointer to a PMD.
  * @addr: Virtual address associated with pmd.
  *
- * Context: The pmd range has been unmaped and TLB purged.
+ * Context: The pmd range has been unmapped and TLB purged.
  * Return: 1 if clearing the entry succeeded. 0 otherwise.
  */
 int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
@@ -758,6 +778,10 @@ int pmd_free_pte_page(pmd_t *pmd, unsign
 
 	pte = (pte_t *)pmd_page_vaddr(*pmd);
 	pmd_clear(pmd);
+
+	/* INVLPG to clear all paging-structure caches */
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+
 	free_page((unsigned long)pte);
 
 	return 1;