From patchwork Wed May 8 14:44:10 2019
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 10935933
From: "Kirill A. Shutemov"
To: Andrew Morton, x86@kernel.org, Thomas Gleixner, Ingo Molnar,
    "H. Peter Anvin", Borislav Petkov, Peter Zijlstra, Andy Lutomirski,
    David Howells
Cc: Kees Cook, Dave Hansen, Kai Huang, Jacob Pan, Alison Schofield,
    linux-mm@kvack.org, kvm@vger.kernel.org, keyrings@vger.kernel.org,
    linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCH, RFC 50/62] kvm, x86, mmu: setup MKTME keyID to spte for given PFN
Date: Wed, 8 May 2019 17:44:10 +0300
Message-Id: <20190508144422.13171-51-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190508144422.13171-1-kirill.shutemov@linux.intel.com>
References: <20190508144422.13171-1-kirill.shutemov@linux.intel.com>

From: Kai Huang

Set the keyID in the SPTE that will eventually be programmed into the
shadow MMU or EPT table, according to the page's associated keyID, so
that the guest accesses its memory with the correct keyID.

Note that the existing shadow_me_mask does not suit MKTME's needs: with
MKTME there is no single fixed memory encryption mask, since the keyID
can vary from 1 up to the maximum keyID, so shadow_me_mask remains 0
for MKTME.

Signed-off-by: Kai Huang
Signed-off-by: Kirill A. Shutemov
---
 arch/x86/kvm/mmu.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d9c7b45d231f..bfee0c194161 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2899,6 +2899,22 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
 #define SET_SPTE_WRITE_PROTECTED_PT	BIT(0)
 #define SET_SPTE_NEED_REMOTE_TLB_FLUSH	BIT(1)
 
+static u64 get_phys_encryption_mask(kvm_pfn_t pfn)
+{
+#ifdef CONFIG_X86_INTEL_MKTME
+	struct page *page;
+
+	if (!pfn_valid(pfn))
+		return 0;
+
+	page = pfn_to_page(pfn);
+
+	return ((u64)page_keyid(page)) << mktme_keyid_shift;
+#else
+	return shadow_me_mask;
+#endif
+}
+
 static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		    unsigned pte_access, int level,
 		    gfn_t gfn, kvm_pfn_t pfn, bool speculative,
@@ -2945,7 +2961,7 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		pte_access &= ~ACC_WRITE_MASK;
 
 	if (!kvm_is_mmio_pfn(pfn))
-		spte |= shadow_me_mask;
+		spte |= get_phys_encryption_mask(pfn);
 
 	spte |= (u64)pfn << PAGE_SHIFT;
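
To illustrate the bit layout the patch produces, here is a minimal
standalone C sketch (not part of the patch) showing how a per-page
keyID composes with the PFN into an SPTE. The real mktme_keyid_shift
is enumerated at boot; the value 43 below, and the pfn/keyid values,
are purely illustrative assumptions:

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative only: the real shift is discovered from the MKTME
     * hardware enumeration at boot; 43 is a plausible placeholder. */
    #define MKTME_KEYID_SHIFT	43
    #define PAGE_SHIFT		12

    int main(void)
    {
    	uint64_t pfn = 0x1b3d5;	/* hypothetical guest page frame */
    	uint64_t keyid = 3;	/* hypothetical MKTME keyID */

    	/* Mirror what set_spte() does with the mask returned by
    	 * get_phys_encryption_mask(): keyID bits sit above the
    	 * physical address, PFN fills the address field. */
    	uint64_t spte = (keyid << MKTME_KEYID_SHIFT) |
    			(pfn << PAGE_SHIFT);

    	printf("spte = 0x%016llx\n", (unsigned long long)spte);
    	return 0;
    }

Because the mask is computed per page from page_keyid() rather than
being one global constant, a fixed shadow_me_mask cannot express it,
which is why shadow_me_mask stays 0 under MKTME.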