From patchwork Tue Oct 23 16:31:56 2018
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 10653415
From: "Kirill A. Shutemov"
To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org
Cc: x86@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCH 1/2] x86/mm: Move LDT remap out of KASLR region on 5-level paging
Date: Tue, 23 Oct 2018 19:31:56 +0300
Message-Id: <20181023163157.41441-2-kirill.shutemov@linux.intel.com>
In-Reply-To: <20181023163157.41441-1-kirill.shutemov@linux.intel.com>
References: <20181023163157.41441-1-kirill.shutemov@linux.intel.com>

On 5-level paging, the LDT remap area is placed in the middle of the KASLR randomization region and can overlap with the direct mapping, vmalloc or vmap area.

Let's move the LDT just before the direct mapping, which makes it safe for KASLR. This also allows us to unify the layout between 4- and 5-level paging.

We don't touch the 4 PGD slot gap just before the direct mapping reserved for a hypervisor, but move the direct mapping by one slot instead.

The LDT mapping is per-mm, so we cannot move it into the P4D page table next to CPU_ENTRY_AREA without complicating PGD table allocation for 5-level paging.

Signed-off-by: Kirill A. Shutemov
Fixes: f55f0501cbf6 ("x86/pti: Put the LDT in its own PGD if PTI is on")
---
 Documentation/x86/x86_64/mm.txt         |  8 ++++----
 arch/x86/include/asm/page_64_types.h    | 12 +++++++-----
 arch/x86/include/asm/pgtable_64_types.h |  4 +---
 arch/x86/xen/mmu_pv.c                   |  6 +++---
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/Documentation/x86/x86_64/mm.txt b/Documentation/x86/x86_64/mm.txt
index 5432a96d31ff..463c48c26fb7 100644
--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -4,7 +4,8 @@ Virtual memory map with 4 level page tables:
 0000000000000000 - 00007fffffffffff (=47 bits) user space, different per mm
 hole caused by [47:63] sign extension
 ffff800000000000 - ffff87ffffffffff (=43 bits) guard hole, reserved for hypervisor
-ffff880000000000 - ffffc7ffffffffff (=64 TB) direct mapping of all phys. memory
+ffff880000000000 - ffff887fffffffff (=39 bits) LDT remap for PTI
+ffff888000000000 - ffffc87fffffffff (=64 TB) direct mapping of all phys. memory
 ffffc80000000000 - ffffc8ffffffffff (=40 bits) hole
 ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space
 ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
@@ -14,7 +15,6 @@ ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
 ... unused hole ...
 vaddr_end for KASLR
 fffffe0000000000 - fffffe7fffffffff (=39 bits) cpu_entry_area mapping
-fffffe8000000000 - fffffeffffffffff (=39 bits) LDT remap for PTI
 ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
 ... unused hole ...
 ffffffef00000000 - fffffffeffffffff (=64 GB) EFI region mapping space
@@ -30,8 +30,8 @@ Virtual memory map with 5 level page tables:
 0000000000000000 - 00ffffffffffffff (=56 bits) user space, different per mm
 hole caused by [56:63] sign extension
 ff00000000000000 - ff0fffffffffffff (=52 bits) guard hole, reserved for hypervisor
-ff10000000000000 - ff8fffffffffffff (=55 bits) direct mapping of all phys. memory
-ff90000000000000 - ff9fffffffffffff (=52 bits) LDT remap for PTI
+ff10000000000000 - ff10ffffffffffff (=48 bits) LDT remap for PTI
+ff11000000000000 - ff90ffffffffffff (=55 bits) direct mapping of all phys. memory
 ffa0000000000000 - ffd1ffffffffffff (=54 bits) vmalloc/ioremap space (12800 TB)
 ffd2000000000000 - ffd3ffffffffffff (=49 bits) hole
 ffd4000000000000 - ffd5ffffffffffff (=49 bits) virtual memory map (512TB)
diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index 6afac386a434..b99d497e342d 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -33,12 +33,14 @@
 /*
  * Set __PAGE_OFFSET to the most negative possible address +
- * PGDIR_SIZE*16 (pgd slot 272). The gap is to allow a space for a
- * hypervisor to fit. Choosing 16 slots here is arbitrary, but it's
- * what Xen requires.
+ * PGDIR_SIZE*17 (pgd slot 273).
+ *
+ * The gap is to allow a space for LDT remap for PTI (1 pgd slot) and space for
+ * a hypervisor (16 slots). Choosing 16 slots for a hypervisor is arbitrary,
+ * but it's what Xen requires.
  */
-#define __PAGE_OFFSET_BASE_L5	_AC(0xff10000000000000, UL)
-#define __PAGE_OFFSET_BASE_L4	_AC(0xffff880000000000, UL)
+#define __PAGE_OFFSET_BASE_L5	_AC(0xff11000000000000, UL)
+#define __PAGE_OFFSET_BASE_L4	_AC(0xffff888000000000, UL)
 
 #ifdef CONFIG_DYNAMIC_MEMORY_LAYOUT
 #define __PAGE_OFFSET           page_offset_base
diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 04edd2d58211..84bd9bdc1987 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -111,9 +111,7 @@ extern unsigned int ptrs_per_p4d;
  */
 #define MAXMEM			(1UL << MAX_PHYSMEM_BITS)
 
-#define LDT_PGD_ENTRY_L4	-3UL
-#define LDT_PGD_ENTRY_L5	-112UL
-#define LDT_PGD_ENTRY		(pgtable_l5_enabled() ? LDT_PGD_ENTRY_L5 : LDT_PGD_ENTRY_L4)
+#define LDT_PGD_ENTRY		-240UL
 #define LDT_BASE_ADDR		(LDT_PGD_ENTRY << PGDIR_SHIFT)
 #define LDT_END_ADDR		(LDT_BASE_ADDR + PGDIR_SIZE)
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index dd461c0167ef..2c84c6ad8b50 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -1897,7 +1897,7 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	init_top_pgt[0] = __pgd(0);
 
 	/* Pre-constructed entries are in pfn, so convert to mfn */
-	/* L4[272] -> level3_ident_pgt */
+	/* L4[273] -> level3_ident_pgt */
 	/* L4[511] -> level3_kernel_pgt */
 	convert_pfn_mfn(init_top_pgt);
 
@@ -1917,8 +1917,8 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	addr[0] = (unsigned long)pgd;
 	addr[1] = (unsigned long)l3;
 	addr[2] = (unsigned long)l2;
-	/* Graft it onto L4[272][0]. Note that we creating an aliasing problem:
-	 * Both L4[272][0] and L4[511][510] have entries that point to the same
+	/* Graft it onto L4[273][0]. Note that we creating an aliasing problem:
+	 * Both L4[273][0] and L4[511][510] have entries that point to the same
 	 * L2 (PMD) tables. Meaning that if you modify it in __va space
 	 * it will be also modified in the __ka space! (But if you just
 	 * modify the PMD table to point to other PTE's or none, then you
From patchwork Tue Oct 23 16:31:57 2018
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 10653413

From: "Kirill A. Shutemov"
To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org
Cc: x86@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCH 2/2] x86/ldt: Unmap PTEs for the slot before freeing LDT
Date: Tue, 23 Oct 2018 19:31:57 +0300
Message-Id: <20181023163157.41441-3-kirill.shutemov@linux.intel.com>
In-Reply-To: <20181023163157.41441-1-kirill.shutemov@linux.intel.com>
References: <20181023163157.41441-1-kirill.shutemov@linux.intel.com>

modify_ldt(2) leaves the old LDT mapped after we switch over to the new one. Memory for the old LDT gets freed and the pages can be re-used.

Leaving the mapping in place can have security implications: the mapping is present in the userspace copy of the page tables, and a Meltdown-like attack can read these freed and possibly reused pages.

It's relatively simple to fix: just unmap the old LDT and flush the TLB before freeing the LDT memory.

We can now avoid flushing the TLB in map_ldt_struct(), as the slot is unmapped and flushed by unmap_ldt_struct() (or was never mapped in the first place).

The overhead of the change should be negligible; this isn't a particularly hot path anyway.

Signed-off-by: Kirill A. Shutemov
Fixes: f55f0501cbf6 ("x86/pti: Put the LDT in its own PGD if PTI is on")
---
 arch/x86/kernel/ldt.c | 59 ++++++++++++++++++++++++++++---------------
 1 file changed, 38 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c
index 733e6ace0fa4..8767fea41309 100644
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -199,14 +199,6 @@ static void sanity_check_ldt_mapping(struct mm_struct *mm)
 /*
  * If PTI is enabled, this maps the LDT into the kernelmode and
  * usermode tables for the given mm.
- *
- * There is no corresponding unmap function. Even if the LDT is freed, we
- * leave the PTEs around until the slot is reused or the mm is destroyed.
- * This is harmless: the LDT is always in ordinary memory, and no one will
- * access the freed slot.
- *
- * If we wanted to unmap freed LDTs, we'd also need to do a flush to make
- * it useful, and the flush would slow down modify_ldt().
  */
 static int
 map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot)
@@ -214,8 +206,7 @@ map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot)
 	unsigned long va;
 	bool is_vmalloc;
 	spinlock_t *ptl;
-	pgd_t *pgd;
-	int i;
+	int i, nr_pages;
 
 	if (!static_cpu_has(X86_FEATURE_PTI))
 		return 0;
@@ -229,16 +220,10 @@ map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot)
 	/* Check if the current mappings are sane */
 	sanity_check_ldt_mapping(mm);
 
-	/*
-	 * Did we already have the top level entry allocated? We can't
-	 * use pgd_none() for this because it doens't do anything on
-	 * 4-level page table kernels.
-	 */
-	pgd = pgd_offset(mm, LDT_BASE_ADDR);
-
 	is_vmalloc = is_vmalloc_addr(ldt->entries);
 
-	for (i = 0; i * PAGE_SIZE < ldt->nr_entries * LDT_ENTRY_SIZE; i++) {
+	nr_pages = DIV_ROUND_UP(ldt->nr_entries * LDT_ENTRY_SIZE, PAGE_SIZE);
+	for (i = 0; i < nr_pages; i++) {
 		unsigned long offset = i << PAGE_SHIFT;
 		const void *src = (char *)ldt->entries + offset;
 		unsigned long pfn;
@@ -272,13 +257,39 @@ map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot)
 	/* Propagate LDT mapping to the user page-table */
 	map_ldt_struct_to_user(mm);
 
-	va = (unsigned long)ldt_slot_va(slot);
-	flush_tlb_mm_range(mm, va, va + LDT_SLOT_STRIDE, 0);
-
 	ldt->slot = slot;
 	return 0;
 }
 
+static void
+unmap_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt)
+{
+	unsigned long va;
+	int i, nr_pages;
+
+	if (!ldt)
+		return;
+
+	/* LDT map/unmap is only required for PTI */
+	if (!static_cpu_has(X86_FEATURE_PTI))
+		return;
+
+	nr_pages = DIV_ROUND_UP(ldt->nr_entries * LDT_ENTRY_SIZE, PAGE_SIZE);
+	for (i = 0; i < nr_pages; i++) {
+		unsigned long offset = i << PAGE_SHIFT;
+		pte_t *ptep;
+		spinlock_t *ptl;
+
+		va = (unsigned long)ldt_slot_va(ldt->slot) + offset;
+		ptep = get_locked_pte(mm, va, &ptl);
+		pte_clear(mm, va, ptep);
+		pte_unmap_unlock(ptep, ptl);
+	}
+
+	va = (unsigned long)ldt_slot_va(ldt->slot);
+	flush_tlb_mm_range(mm, va, va + nr_pages * PAGE_SIZE, 0);
+}
+
 #else /* !CONFIG_PAGE_TABLE_ISOLATION */
 
 static int
@@ -286,6 +297,11 @@ map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot)
 {
 	return 0;
 }
+
+static void
+unmap_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt)
+{
+}
 #endif /* CONFIG_PAGE_TABLE_ISOLATION */
 
 static void free_ldt_pgtables(struct mm_struct *mm)
@@ -524,6 +540,7 @@ static int write_ldt(void __user *ptr, unsigned long bytecount, int oldmode)
 	}
 
 	install_ldt(mm, new_ldt);
+	unmap_ldt_struct(mm, old_ldt);
 	free_ldt_struct(old_ldt);
 	error = 0;