From patchwork Tue Jun 12 14:39:12 2018
X-Patchwork-Submitter: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
X-Patchwork-Id: 10460527
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Ingo Molnar, x86@kernel.org, Thomas Gleixner, "H. Peter Anvin", Tom Lendacky
Cc: Dave Hansen, Kai Huang, Jacob Pan, linux-kernel@vger.kernel.org, linux-mm@kvack.org, "Kirill A. Shutemov"
Subject: [PATCHv3 14/17] x86/mm: Introduce direct_mapping_size
Date: Tue, 12 Jun 2018 17:39:12 +0300
Message-Id: <20180612143915.68065-15-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20180612143915.68065-1-kirill.shutemov@linux.intel.com>
References: <20180612143915.68065-1-kirill.shutemov@linux.intel.com>

The kernel needs a way to access encrypted memory. We are going to use
a per-KeyID direct mapping to facilitate the access with minimal
overhead.

The direct mappings for the KeyIDs will be placed next to each other in
the virtual address space, so we need a way to find the boundaries of
the direct mapping for a particular KeyID.

The new variable direct_mapping_size specifies the size of a single
direct mapping. With this value, finding the direct mapping for KeyID-N
is trivial: it starts at PAGE_OFFSET + N * direct_mapping_size.
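For illustration only (not part of this patch), a minimal sketch of how
the layout described above could be used to locate a KeyID's direct
mapping; the helper name below is hypothetical:

/*
 * Illustrative sketch, not part of this patch: the start of the
 * direct mapping for a given KeyID under the layout described above.
 */
static inline unsigned long keyid_direct_map_base(int keyid)
{
	/* KeyID-0 starts at PAGE_OFFSET; KeyID-N is N mappings further. */
	return PAGE_OFFSET + (unsigned long)keyid * direct_mapping_size;
}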
The size of the direct mapping is calculated during KASLR setup. If
KASLR is disabled, it happens during MKTME initialization.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/include/asm/mktme.h   |  2 ++
 arch/x86/include/asm/page_64.h |  1 +
 arch/x86/kernel/head64.c       |  2 ++
 arch/x86/mm/kaslr.c            | 21 ++++++++++++---
 arch/x86/mm/mktme.c            | 48 ++++++++++++++++++++++++++++++++++
 5 files changed, 71 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/mktme.h b/arch/x86/include/asm/mktme.h
index 9363b989a021..3bf481fe3f56 100644
--- a/arch/x86/include/asm/mktme.h
+++ b/arch/x86/include/asm/mktme.h
@@ -40,6 +40,8 @@ int page_keyid(const struct page *page);
 
 void mktme_disable(void);
 
+void setup_direct_mapping_size(void);
+
 #else
 #define mktme_keyid_mask	((phys_addr_t)0)
 #define mktme_nr_keyids	0
diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index 939b1cff4a7b..53c32af895ab 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -14,6 +14,7 @@ extern unsigned long phys_base;
 extern unsigned long page_offset_base;
 extern unsigned long vmalloc_base;
 extern unsigned long vmemmap_base;
+extern unsigned long direct_mapping_size;
 
 static inline unsigned long __phys_addr_nodebug(unsigned long x)
 {
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index a21d6ace648e..b6175376b2e1 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -59,6 +59,8 @@ EXPORT_SYMBOL(vmalloc_base);
 unsigned long vmemmap_base __ro_after_init = __VMEMMAP_BASE_L4;
 EXPORT_SYMBOL(vmemmap_base);
 #endif
+unsigned long direct_mapping_size __ro_after_init = -1UL;
+EXPORT_SYMBOL(direct_mapping_size);
 
 #define __head	__section(.head.text)
 
diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 4408cd9a3bef..3d8ef8cb97e1 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -69,6 +69,15 @@ static inline bool kaslr_memory_enabled(void)
 	return kaslr_enabled() && !IS_ENABLED(CONFIG_KASAN);
 }
 
+#ifndef CONFIG_X86_INTEL_MKTME
+static void __init setup_direct_mapping_size(void)
+{
+	direct_mapping_size = max_pfn << PAGE_SHIFT;
+	direct_mapping_size = round_up(direct_mapping_size, 1UL << TB_SHIFT);
+	direct_mapping_size += (1UL << TB_SHIFT) * CONFIG_MEMORY_PHYSICAL_PADDING;
+}
+#endif
+
 /* Initialize base and padding for each memory region randomized with KASLR */
 void __init kernel_randomize_memory(void)
 {
@@ -93,7 +102,11 @@ void __init kernel_randomize_memory(void)
 	if (!kaslr_memory_enabled())
 		return;
 
-	kaslr_regions[0].size_tb = 1 << (__PHYSICAL_MASK_SHIFT - TB_SHIFT);
+	/*
+	 * Upper limit for the direct mapping size is 1/4 of the whole
+	 * virtual address space.
+	 */
+	kaslr_regions[0].size_tb = 1 << (__VIRTUAL_MASK_SHIFT - 1 - TB_SHIFT);
 	kaslr_regions[1].size_tb = VMALLOC_SIZE_TB;
 
 	/*
@@ -101,8 +114,10 @@ void __init kernel_randomize_memory(void)
 	 * add padding if needed (especially for memory hotplug support).
 	 */
 	BUG_ON(kaslr_regions[0].base != &page_offset_base);
-	memory_tb = DIV_ROUND_UP(max_pfn << PAGE_SHIFT, 1UL << TB_SHIFT) +
-		CONFIG_MEMORY_PHYSICAL_PADDING;
+
+	setup_direct_mapping_size();
+
+	memory_tb = direct_mapping_size * (mktme_nr_keyids + 1);
 
 	/* Adapt phyiscal memory region size based on available memory */
 	if (memory_tb < kaslr_regions[0].size_tb)
diff --git a/arch/x86/mm/mktme.c b/arch/x86/mm/mktme.c
index 43a44f0f2a2d..3e5322bf035e 100644
--- a/arch/x86/mm/mktme.c
+++ b/arch/x86/mm/mktme.c
@@ -89,3 +89,51 @@ static bool need_page_mktme(void)
 struct page_ext_operations page_mktme_ops = {
 	.need = need_page_mktme,
 };
+
+void __init setup_direct_mapping_size(void)
+{
+	unsigned long available_va;
+
+	/* 1/4 of the virtual address space is dedicated to the direct mapping */
+	available_va = 1UL << (__VIRTUAL_MASK_SHIFT - 1);
+
+	/* How much memory does the system have? */
+	direct_mapping_size = max_pfn << PAGE_SHIFT;
+	direct_mapping_size = round_up(direct_mapping_size, 1UL << 40);
+
+	if (mktme_status != MKTME_ENUMERATED)
+		goto out;
+
+	/*
+	 * Not enough virtual address space to address all physical memory
+	 * with MKTME enabled, even without padding.
+	 *
+	 * Disable MKTME instead.
+	 */
+	if (direct_mapping_size > available_va / (mktme_nr_keyids + 1)) {
+		pr_err("x86/mktme: Disabled. Not enough virtual address space\n");
+		pr_err("x86/mktme: Consider switching to 5-level paging\n");
+		mktme_disable();
+		goto out;
+	}
+
+	/*
+	 * Virtual address space is divided between per-KeyID direct mappings.
+	 */
+	available_va /= mktme_nr_keyids + 1;
+out:
+	/* Add padding if there's enough virtual address space */
+	direct_mapping_size += (1UL << 40) * CONFIG_MEMORY_PHYSICAL_PADDING;
+	if (direct_mapping_size > available_va)
+		direct_mapping_size = available_va;
+}
+
+static int __init mktme_init(void)
+{
+	/* KASLR didn't initialize it for us. */
+	if (direct_mapping_size == -1UL)
+		setup_direct_mapping_size();
+
+	return 0;
+}
+arch_initcall(mktme_init);
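To make the sizing logic above easier to follow, here is a small
stand-alone user-space sketch of the same arithmetic. It is not part of
the patch; the example values (4-level paging, 63 KeyIDs, 1 TiB of RAM,
a padding of 10 TiB standing in for CONFIG_MEMORY_PHYSICAL_PADDING) are
assumptions chosen for illustration, and a 64-bit host is assumed.

/*
 * Stand-alone illustration (not kernel code) of the sizing logic in
 * setup_direct_mapping_size(). All example values are assumptions.
 */
#include <stdio.h>

#define TB_SHIFT		40
#define VIRTUAL_MASK_SHIFT	47	/* 4-level paging */
#define NR_KEYIDS		63	/* example KeyID count */
#define PHYSICAL_PADDING	10	/* stand-in for CONFIG_MEMORY_PHYSICAL_PADDING, in TiB */

static unsigned long round_up_tb(unsigned long x)
{
	unsigned long tb = 1UL << TB_SHIFT;

	return (x + tb - 1) & ~(tb - 1);
}

int main(void)
{
	unsigned long available_va = 1UL << (VIRTUAL_MASK_SHIFT - 1);	/* 1/4 of VA space */
	unsigned long ram = 1UL << TB_SHIFT;				/* 1 TiB of RAM */
	unsigned long direct_mapping_size = round_up_tb(ram);

	if (direct_mapping_size > available_va / (NR_KEYIDS + 1)) {
		printf("MKTME would be disabled: not enough virtual address space\n");
		return 0;
	}

	/* NR_KEYIDS + 1 direct mappings (KeyID-0 included) share the VA budget equally. */
	available_va /= NR_KEYIDS + 1;

	/* Add padding, then clamp to the available per-KeyID share. */
	direct_mapping_size += (1UL << TB_SHIFT) * PHYSICAL_PADDING;
	if (direct_mapping_size > available_va)
		direct_mapping_size = available_va;

	printf("direct_mapping_size = %lu TiB\n", direct_mapping_size >> TB_SHIFT);
	return 0;
}

With these example numbers the 64 TiB direct-mapping budget is split
into 64 slots of 1 TiB each, so the padding is clamped away; this is
why the patch suggests 5-level paging once MKTME multiplies the number
of direct mappings.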