From patchwork Thu Aug 16 17:01:58 2018
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 10567901
Subject: Patch "x86/mm: Add TLB purge to free pmd/pte page interfaces" has
 been added to the 4.18-stable tree
To: 20180627141348.21777-4-toshi.kani@hpe.com, akpm@linux-foundation.org,
 cpandya@codeaurora.org, gregkh@linuxfoundation.org, hpa@zytor.com,
 joro@8bytes.org, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 mhocko@suse.com, tglx@linutronix.de, toshi.kani@hpe.com
Date: Thu, 16 Aug 2018 19:01:58 +0200
Message-ID: <15344389188812@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-stable: commit

This is a note to let you know that I've just added the patch titled

    x86/mm: Add TLB purge to free pmd/pte page interfaces

to the 4.18-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     x86-mm-add-tlb-purge-to-free-pmd-pte-page-interfaces.patch
and it can be found in the queue-4.18 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@vger.kernel.org> know about it.


From 5e0fb5df2ee871b841f96f9cb6a7f2784e96aa4e Mon Sep 17 00:00:00 2001
From: Toshi Kani
Date: Wed, 27 Jun 2018 08:13:48 -0600
Subject: x86/mm: Add TLB purge to free pmd/pte page interfaces

From: Toshi Kani

commit 5e0fb5df2ee871b841f96f9cb6a7f2784e96aa4e upstream.

ioremap() calls pud_free_pmd_page() / pmd_free_pte_page() when it creates
a pud / pmd map.  The following preconditions are met at their entry.
 - All pte entries for a target pud/pmd address range have been cleared.
 - System-wide TLB purges have been performed for a target pud/pmd address
   range.

The preconditions assure that there is no stale TLB entry for the range.
Speculation may not cache TLB entries since it requires all levels of page
entries, including ptes, to have P & A-bits set for an associated address.
However, speculation may cache pud/pmd entries (paging-structure caches)
when they have P-bit set.

Add a system-wide TLB purge (INVLPG) to a single page after clearing
pud/pmd entry's P-bit.

SDM 4.10.4.1, Operations that Invalidate TLBs and Paging-Structure Caches,
states that:
  INVLPG invalidates all paging-structure caches associated with the
  current PCID regardless of the linear addresses to which they correspond.
Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
Signed-off-by: Toshi Kani
Signed-off-by: Thomas Gleixner
Cc: mhocko@suse.com
Cc: akpm@linux-foundation.org
Cc: hpa@zytor.com
Cc: cpandya@codeaurora.org
Cc: linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: Joerg Roedel
Cc: stable@vger.kernel.org
Cc: Andrew Morton
Cc: Michal Hocko
Cc: "H. Peter Anvin"
Link: https://lkml.kernel.org/r/20180627141348.21777-4-toshi.kani@hpe.com
Signed-off-by: Greg Kroah-Hartman

---
 arch/x86/mm/pgtable.c |   38 +++++++++++++++++++++++++++++++-------
 1 file changed, 31 insertions(+), 7 deletions(-)

Patches currently in stable-queue which might be from toshi.kani@hpe.com are

queue-4.18/x86-mm-disable-ioremap-free-page-handling-on-x86-pae.patch
queue-4.18/ioremap-update-pgtable-free-interfaces-with-addr.patch
queue-4.18/x86-mm-add-tlb-purge-to-free-pmd-pte-page-interfaces.patch

--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -725,24 +725,44 @@ int pmd_clear_huge(pmd_t *pmd)
  * @pud: Pointer to a PUD.
  * @addr: Virtual address associated with pud.
  *
- * Context: The pud range has been unmaped and TLB purged.
+ * Context: The pud range has been unmapped and TLB purged.
  * Return: 1 if clearing the entry succeeded. 0 otherwise.
+ *
+ * NOTE: Callers must allow a single page allocation.
  */
 int pud_free_pmd_page(pud_t *pud, unsigned long addr)
 {
-	pmd_t *pmd;
+	pmd_t *pmd, *pmd_sv;
+	pte_t *pte;
 	int i;
 
 	if (pud_none(*pud))
 		return 1;
 
 	pmd = (pmd_t *)pud_page_vaddr(*pud);
-
-	for (i = 0; i < PTRS_PER_PMD; i++)
-		if (!pmd_free_pte_page(&pmd[i], addr + (i * PMD_SIZE)))
-			return 0;
+	pmd_sv = (pmd_t *)__get_free_page(GFP_KERNEL);
+	if (!pmd_sv)
+		return 0;
+
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		pmd_sv[i] = pmd[i];
+		if (!pmd_none(pmd[i]))
+			pmd_clear(&pmd[i]);
+	}
 
 	pud_clear(pud);
+
+	/* INVLPG to clear all paging-structure caches */
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		if (!pmd_none(pmd_sv[i])) {
+			pte = (pte_t *)pmd_page_vaddr(pmd_sv[i]);
+			free_page((unsigned long)pte);
+		}
+	}
+
+	free_page((unsigned long)pmd_sv);
 	free_page((unsigned long)pmd);
 
 	return 1;
@@ -753,7 +773,7 @@ int pud_free_pmd_page(pud_t *pud, unsign
  * @pmd: Pointer to a PMD.
  * @addr: Virtual address associated with pmd.
  *
- * Context: The pmd range has been unmaped and TLB purged.
+ * Context: The pmd range has been unmapped and TLB purged.
  * Return: 1 if clearing the entry succeeded. 0 otherwise.
  */
 int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
@@ -765,6 +785,10 @@ int pmd_free_pte_page(pmd_t *pmd, unsign
 
 	pte = (pte_t *)pmd_page_vaddr(*pmd);
 	pmd_clear(pmd);
+
+	/* INVLPG to clear all paging-structure caches */
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+
 	free_page((unsigned long)pte);
 
 	return 1;