From patchwork Thu Aug 16 17:01:39 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 10567893
Subject: Patch "x86/mm: Add TLB purge to free pmd/pte page interfaces" has
 been added to the 4.17-stable tree
To: 20180627141348.21777-4-toshi.kani@hpe.com, akpm@linux-foundation.org,
 cpandya@codeaurora.org, gregkh@linuxfoundation.org, hpa@zytor.com,
 joro@8bytes.org, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 mhocko@suse.com, tglx@linutronix.de, toshi.kani@hpe.com
Cc:
From:
Date: Thu, 16 Aug 2018 19:01:39 +0200
Message-ID: <153443889914125@kroah.com>
MIME-Version: 1.0
X-stable: commit

This
is a note to let you know that I've just added the patch titled

    x86/mm: Add TLB purge to free pmd/pte page interfaces

to the 4.17-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     x86-mm-add-tlb-purge-to-free-pmd-pte-page-interfaces.patch
and it can be found in the queue-4.17 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let us know about it.


From 5e0fb5df2ee871b841f96f9cb6a7f2784e96aa4e Mon Sep 17 00:00:00 2001
From: Toshi Kani
Date: Wed, 27 Jun 2018 08:13:48 -0600
Subject: x86/mm: Add TLB purge to free pmd/pte page interfaces

From: Toshi Kani

commit 5e0fb5df2ee871b841f96f9cb6a7f2784e96aa4e upstream.

ioremap() calls pud_free_pmd_page() / pmd_free_pte_page() when it creates
a pud / pmd map.  The following preconditions are met at their entry.
 - All pte entries for a target pud/pmd address range have been cleared.
 - System-wide TLB purges have been performed for a target pud/pmd address
   range.

The preconditions assure that there is no stale TLB entry for the range.
Speculation may not cache TLB entries since it requires all levels of page
entries, including ptes, to have P & A-bits set for an associated address.
However, speculation may cache pud/pmd entries (paging-structure caches)
when they have P-bit set.

Add a system-wide TLB purge (INVLPG) to a single page after clearing
pud/pmd entry's P-bit.

SDM 4.10.4.1, Operations that Invalidate TLBs and Paging-Structure Caches,
states that:
  INVLPG invalidates all paging-structure caches associated with the
  current PCID regardless of the linear addresses to which they correspond.
Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
Signed-off-by: Toshi Kani
Signed-off-by: Thomas Gleixner
Cc: mhocko@suse.com
Cc: akpm@linux-foundation.org
Cc: hpa@zytor.com
Cc: cpandya@codeaurora.org
Cc: linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: Joerg Roedel
Cc: stable@vger.kernel.org
Cc: Andrew Morton
Cc: Michal Hocko
Cc: "H. Peter Anvin"
Link: https://lkml.kernel.org/r/20180627141348.21777-4-toshi.kani@hpe.com
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/mm/pgtable.c |   38 +++++++++++++++++++++++++++++++-------
 1 file changed, 31 insertions(+), 7 deletions(-)

Patches currently in stable-queue which might be from toshi.kani@hpe.com are

queue-4.17/x86-mm-disable-ioremap-free-page-handling-on-x86-pae.patch
queue-4.17/ioremap-update-pgtable-free-interfaces-with-addr.patch
queue-4.17/x86-mm-add-tlb-purge-to-free-pmd-pte-page-interfaces.patch

--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -721,24 +721,44 @@ int pmd_clear_huge(pmd_t *pmd)
  * @pud: Pointer to a PUD.
  * @addr: Virtual address associated with pud.
  *
- * Context: The pud range has been unmaped and TLB purged.
+ * Context: The pud range has been unmapped and TLB purged.
  * Return: 1 if clearing the entry succeeded. 0 otherwise.
+ *
+ * NOTE: Callers must allow a single page allocation.
  */
 int pud_free_pmd_page(pud_t *pud, unsigned long addr)
 {
-	pmd_t *pmd;
+	pmd_t *pmd, *pmd_sv;
+	pte_t *pte;
 	int i;
 
 	if (pud_none(*pud))
 		return 1;
 
 	pmd = (pmd_t *)pud_page_vaddr(*pud);
-
-	for (i = 0; i < PTRS_PER_PMD; i++)
-		if (!pmd_free_pte_page(&pmd[i], addr + (i * PMD_SIZE)))
-			return 0;
+	pmd_sv = (pmd_t *)__get_free_page(GFP_KERNEL);
+	if (!pmd_sv)
+		return 0;
+
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		pmd_sv[i] = pmd[i];
+		if (!pmd_none(pmd[i]))
+			pmd_clear(&pmd[i]);
+	}
 
 	pud_clear(pud);
+
+	/* INVLPG to clear all paging-structure caches */
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		if (!pmd_none(pmd_sv[i])) {
+			pte = (pte_t *)pmd_page_vaddr(pmd_sv[i]);
+			free_page((unsigned long)pte);
+		}
+	}
+
+	free_page((unsigned long)pmd_sv);
 	free_page((unsigned long)pmd);
 
 	return 1;
@@ -749,7 +769,7 @@ int pud_free_pmd_page(pud_t *pud, unsign
  * @pmd: Pointer to a PMD.
  * @addr: Virtual address associated with pmd.
  *
- * Context: The pmd range has been unmaped and TLB purged.
+ * Context: The pmd range has been unmapped and TLB purged.
  * Return: 1 if clearing the entry succeeded. 0 otherwise.
  */
 int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
@@ -761,6 +781,10 @@ int pmd_free_pte_page(pmd_t *pmd, unsign
 
 	pte = (pte_t *)pmd_page_vaddr(*pmd);
 	pmd_clear(pmd);
+
+	/* INVLPG to clear all paging-structure caches */
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+
 	free_page((unsigned long)pte);
 
 	return 1;
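For readers tracing the ordering the patch enforces, here is a minimal userspace
sketch of the save-entries / clear / flush / free sequence in the new
pud_free_pmd_page().  Everything below is a hypothetical stand-in: SIM_PTRS_PER_PMD,
sim_flush_tlb(), and the void-pointer "tables" are illustrative only and are not
kernel definitions; the real code operates on pud_t/pmd_t entries and issues a
genuine flush_tlb_kernel_range().

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for PTRS_PER_PMD; value chosen only for the demo. */
#define SIM_PTRS_PER_PMD 8

static int sim_tlb_flushed; /* records that the flush happened */

/* Stand-in for flush_tlb_kernel_range(): the point is *when* it runs,
 * not what it does. */
static void sim_flush_tlb(void)
{
	sim_tlb_flushed = 1;
}

/* Userspace model of the patched pud_free_pmd_page() ordering.
 * Each non-NULL pmd entry points at a heap-allocated "pte page". */
static int sim_pud_free_pmd_page(void **pud)
{
	void **pmd = (void **)*pud;
	void **pmd_sv;
	int i;

	if (!pmd)
		return 1;

	/* 1. Save the pmd entries to a scratch page (hence the patch's
	 *    NOTE that callers must allow a single page allocation)... */
	pmd_sv = malloc(sizeof(void *) * SIM_PTRS_PER_PMD);
	if (!pmd_sv)
		return 0;

	for (i = 0; i < SIM_PTRS_PER_PMD; i++) {
		pmd_sv[i] = pmd[i];
		pmd[i] = NULL;	/* 2. ...then clear each present entry, */
	}
	*pud = NULL;		/*    and clear the pud itself. */

	/* 3. Flush *after* clearing but *before* freeing, so no cached
	 *    paging-structure entry can still reference a page that has
	 *    already been handed back to the allocator. */
	sim_flush_tlb();

	/* 4. Only now free the pte pages and the scratch page. */
	for (i = 0; i < SIM_PTRS_PER_PMD; i++) {
		if (pmd_sv[i])
			free(pmd_sv[i]);
	}
	free(pmd_sv);
	free(pmd);

	return 1;
}
```

The pre-patch code freed the pte pages before any flush of the pud/pmd level;
moving the flush between steps 2 and 4 is the whole fix.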