From patchwork Thu Aug 16 17:02:15 2018
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 10567907
Subject: Patch "x86/mm: Add TLB purge to free pmd/pte page interfaces" has been added to the 4.4-stable tree
Date: Thu, 16 Aug 2018 19:02:15 +0200
Message-ID: <1534438935108145@kroah.com>

This
is a note to let you know that I've just added the patch titled

    x86/mm: Add TLB purge to free pmd/pte page interfaces

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     x86-mm-add-tlb-purge-to-free-pmd-pte-page-interfaces.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From 5e0fb5df2ee871b841f96f9cb6a7f2784e96aa4e Mon Sep 17 00:00:00 2001
From: Toshi Kani
Date: Wed, 27 Jun 2018 08:13:48 -0600
Subject: x86/mm: Add TLB purge to free pmd/pte page interfaces

From: Toshi Kani

commit 5e0fb5df2ee871b841f96f9cb6a7f2784e96aa4e upstream.

ioremap() calls pud_free_pmd_page() / pmd_free_pte_page() when it creates
a pud / pmd map.  The following preconditions are met at their entry.
 - All pte entries for a target pud/pmd address range have been cleared.
 - System-wide TLB purges have been performed for a target pud/pmd address
   range.

The preconditions assure that there is no stale TLB entry for the range.
Speculation may not cache TLB entries since it requires all levels of page
entries, including ptes, to have P & A-bits set for an associated address.
However, speculation may cache pud/pmd entries (paging-structure caches)
when they have P-bit set.

Add a system-wide TLB purge (INVLPG) to a single page after clearing
pud/pmd entry's P-bit.

SDM 4.10.4.1, Operations that Invalidate TLBs and Paging-Structure Caches,
states that:
  INVLPG invalidates all paging-structure caches associated with the
  current PCID regardless of the linear addresses to which they correspond.
Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
Signed-off-by: Toshi Kani
Signed-off-by: Thomas Gleixner
Cc: mhocko@suse.com
Cc: akpm@linux-foundation.org
Cc: hpa@zytor.com
Cc: cpandya@codeaurora.org
Cc: linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: Joerg Roedel
Cc: stable@vger.kernel.org
Cc: Andrew Morton
Cc: Michal Hocko
Cc: "H. Peter Anvin"
Link: https://lkml.kernel.org/r/20180627141348.21777-4-toshi.kani@hpe.com
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/mm/pgtable.c |   38 +++++++++++++++++++++++++++++++-------
 1 file changed, 31 insertions(+), 7 deletions(-)

Patches currently in stable-queue which might be from toshi.kani@hpe.com are

queue-4.4/x86-mm-disable-ioremap-free-page-handling-on-x86-pae.patch
queue-4.4/ioremap-update-pgtable-free-interfaces-with-addr.patch
queue-4.4/x86-mm-add-tlb-purge-to-free-pmd-pte-page-interfaces.patch

--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -682,24 +682,44 @@ int pmd_clear_huge(pmd_t *pmd)
  * @pud: Pointer to a PUD.
  * @addr: Virtual address associated with pud.
  *
- * Context: The pud range has been unmaped and TLB purged.
+ * Context: The pud range has been unmapped and TLB purged.
  * Return: 1 if clearing the entry succeeded. 0 otherwise.
+ *
+ * NOTE: Callers must allow a single page allocation.
  */
 int pud_free_pmd_page(pud_t *pud, unsigned long addr)
 {
-	pmd_t *pmd;
+	pmd_t *pmd, *pmd_sv;
+	pte_t *pte;
 	int i;
 
 	if (pud_none(*pud))
 		return 1;
 
 	pmd = (pmd_t *)pud_page_vaddr(*pud);
-
-	for (i = 0; i < PTRS_PER_PMD; i++)
-		if (!pmd_free_pte_page(&pmd[i], addr + (i * PMD_SIZE)))
-			return 0;
+	pmd_sv = (pmd_t *)__get_free_page(GFP_KERNEL);
+	if (!pmd_sv)
+		return 0;
+
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		pmd_sv[i] = pmd[i];
+		if (!pmd_none(pmd[i]))
+			pmd_clear(&pmd[i]);
+	}
 
 	pud_clear(pud);
+
+	/* INVLPG to clear all paging-structure caches */
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		if (!pmd_none(pmd_sv[i])) {
+			pte = (pte_t *)pmd_page_vaddr(pmd_sv[i]);
+			free_page((unsigned long)pte);
+		}
+	}
+
+	free_page((unsigned long)pmd_sv);
 	free_page((unsigned long)pmd);
 
 	return 1;
@@ -710,7 +730,7 @@ int pud_free_pmd_page(pud_t *pud, unsign
  * @pmd: Pointer to a PMD.
  * @addr: Virtual address associated with pmd.
  *
- * Context: The pmd range has been unmaped and TLB purged.
+ * Context: The pmd range has been unmapped and TLB purged.
  * Return: 1 if clearing the entry succeeded. 0 otherwise.
  */
 int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
@@ -722,6 +742,10 @@ int pmd_free_pte_page(pmd_t *pmd, unsign
 
 	pte = (pte_t *)pmd_page_vaddr(*pmd);
 	pmd_clear(pmd);
+
+	/* INVLPG to clear all paging-structure caches */
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+
 	free_page((unsigned long)pte);
 
 	return 1;
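The essence of the patch is the ordering in pud_free_pmd_page(): snapshot the pmd entries, clear them, issue the flush that also empties the paging-structure caches, and only then free the lower-level tables via the snapshot. The following is a minimal user-space sketch of that pattern, not kernel code; every name in it (fake_pmd_t, flush_caches, fake_pud_free_pmd_page, FAKE_PTRS_PER_PMD) is invented for illustration.

```c
#include <stdlib.h>

#define FAKE_PTRS_PER_PMD 8

typedef struct { void *table; } fake_pmd_t;  /* stand-in for pmd_t */

static int flushed;                          /* records that the flush ran */

static void flush_caches(void)
{
	flushed = 1;                         /* stand-in for the INVLPG flush */
}

/* Returns 1 on success, 0 on allocation failure, like the kernel helper. */
static int fake_pud_free_pmd_page(fake_pmd_t *pmd)
{
	fake_pmd_t *pmd_sv;
	int i;

	/* Save a copy of the entries before clearing them. */
	pmd_sv = malloc(sizeof(*pmd_sv) * FAKE_PTRS_PER_PMD);
	if (!pmd_sv)
		return 0;

	for (i = 0; i < FAKE_PTRS_PER_PMD; i++) {
		pmd_sv[i] = pmd[i];
		pmd[i].table = NULL;         /* stand-in for pmd_clear() */
	}

	/* Flush while the old tables are unreachable but still allocated. */
	flush_caches();

	/* Only now is it safe to free the lower-level tables. */
	for (i = 0; i < FAKE_PTRS_PER_PMD; i++)
		free(pmd_sv[i].table);

	free(pmd_sv);
	return 1;
}
```

Freeing the pte pages before the flush, as the old pmd_free_pte_page() loop effectively did, would leave a window in which a speculative walk through a cached pud/pmd entry could still reach a freed page table; snapshotting the entries is what allows the flush to complete first.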