From patchwork Thu Aug 16 17:02:32 2018
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 10567911
Subject: Patch "x86/mm: Add TLB purge to free pmd/pte page interfaces" has
 been added to the 4.9-stable tree
To: 20180627141348.21777-4-toshi.kani@hpe.com, akpm@linux-foundation.org,
 cpandya@codeaurora.org, gregkh@linuxfoundation.org, hpa@zytor.com,
 joro@8bytes.org, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 mhocko@suse.com, tglx@linutronix.de, toshi.kani@hpe.com
Date: Thu, 16 Aug 2018 19:02:32 +0200
Message-ID: <153443895215610@kroah.com>

This
is a note to let you know that I've just added the patch titled

    x86/mm: Add TLB purge to free pmd/pte page interfaces

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     x86-mm-add-tlb-purge-to-free-pmd-pte-page-interfaces.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let me know about it.


From 5e0fb5df2ee871b841f96f9cb6a7f2784e96aa4e Mon Sep 17 00:00:00 2001
From: Toshi Kani
Date: Wed, 27 Jun 2018 08:13:48 -0600
Subject: x86/mm: Add TLB purge to free pmd/pte page interfaces

From: Toshi Kani

commit 5e0fb5df2ee871b841f96f9cb6a7f2784e96aa4e upstream.

ioremap() calls pud_free_pmd_page() / pmd_free_pte_page() when it creates
a pud / pmd map.  The following preconditions are met at their entry.
 - All pte entries for a target pud/pmd address range have been cleared.
 - System-wide TLB purges have been performed for a target pud/pmd address
   range.

The preconditions assure that there is no stale TLB entry for the range.
Speculation may not cache TLB entries since it requires all levels of page
entries, including ptes, to have P & A-bits set for an associated address.
However, speculation may cache pud/pmd entries (paging-structure caches)
when they have the P-bit set.

Add a system-wide TLB purge (INVLPG) to a single page after clearing
pud/pmd entry's P-bit.

SDM 4.10.4.1, Operations that Invalidate TLBs and Paging-Structure Caches,
states that:
  INVLPG invalidates all paging-structure caches associated with the
  current PCID regardless of the linear addresses to which they correspond.
Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
Signed-off-by: Toshi Kani
Signed-off-by: Thomas Gleixner
Cc: mhocko@suse.com
Cc: akpm@linux-foundation.org
Cc: hpa@zytor.com
Cc: cpandya@codeaurora.org
Cc: linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: Joerg Roedel
Cc: stable@vger.kernel.org
Cc: Andrew Morton
Cc: Michal Hocko
Cc: "H. Peter Anvin"
Cc:
Link: https://lkml.kernel.org/r/20180627141348.21777-4-toshi.kani@hpe.com
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/mm/pgtable.c |   38 +++++++++++++++++++++++++++++++-------
 1 file changed, 31 insertions(+), 7 deletions(-)

Patches currently in stable-queue which might be from toshi.kani@hpe.com are

queue-4.9/x86-mm-disable-ioremap-free-page-handling-on-x86-pae.patch
queue-4.9/ioremap-update-pgtable-free-interfaces-with-addr.patch
queue-4.9/x86-mm-add-tlb-purge-to-free-pmd-pte-page-interfaces.patch

---
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -659,24 +659,44 @@ int pmd_clear_huge(pmd_t *pmd)
  * @pud: Pointer to a PUD.
  * @addr: Virtual address associated with pud.
  *
- * Context: The pud range has been unmaped and TLB purged.
+ * Context: The pud range has been unmapped and TLB purged.
  * Return: 1 if clearing the entry succeeded. 0 otherwise.
+ *
+ * NOTE: Callers must allow a single page allocation.
  */
 int pud_free_pmd_page(pud_t *pud, unsigned long addr)
 {
-	pmd_t *pmd;
+	pmd_t *pmd, *pmd_sv;
+	pte_t *pte;
 	int i;
 
 	if (pud_none(*pud))
 		return 1;
 
 	pmd = (pmd_t *)pud_page_vaddr(*pud);
-
-	for (i = 0; i < PTRS_PER_PMD; i++)
-		if (!pmd_free_pte_page(&pmd[i], addr + (i * PMD_SIZE)))
-			return 0;
+	pmd_sv = (pmd_t *)__get_free_page(GFP_KERNEL);
+	if (!pmd_sv)
+		return 0;
+
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		pmd_sv[i] = pmd[i];
+		if (!pmd_none(pmd[i]))
+			pmd_clear(&pmd[i]);
+	}
 
 	pud_clear(pud);
+
+	/* INVLPG to clear all paging-structure caches */
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		if (!pmd_none(pmd_sv[i])) {
+			pte = (pte_t *)pmd_page_vaddr(pmd_sv[i]);
+			free_page((unsigned long)pte);
+		}
+	}
+
+	free_page((unsigned long)pmd_sv);
 	free_page((unsigned long)pmd);
 
 	return 1;
@@ -687,7 +707,7 @@ int pud_free_pmd_page(pud_t *pud, unsign
  * @pmd: Pointer to a PMD.
  * @addr: Virtual address associated with pmd.
  *
- * Context: The pmd range has been unmaped and TLB purged.
+ * Context: The pmd range has been unmapped and TLB purged.
  * Return: 1 if clearing the entry succeeded. 0 otherwise.
  */
 int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
@@ -699,6 +719,10 @@ int pmd_free_pte_page(pmd_t *pmd, unsign
 	pte = (pte_t *)pmd_page_vaddr(*pmd);
 	pmd_clear(pmd);
+
+	/* INVLPG to clear all paging-structure caches */
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+
 	free_page((unsigned long)pte);
 
 	return 1;