From patchwork Wed May 8 14:43:39 2019
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Andrew Morton, x86@kernel.org, Thomas Gleixner, Ingo Molnar,
    "H. Peter Anvin", Borislav Petkov, Peter Zijlstra, Andy Lutomirski,
    David Howells
Cc: Kees Cook, Dave Hansen, Kai Huang, Jacob Pan, Alison Schofield,
    linux-mm@kvack.org, kvm@vger.kernel.org, keyrings@vger.kernel.org,
    linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCH, RFC 19/62] x86/mm: Handle encrypted memory in page_to_virt() and __pa()
Date: Wed, 8 May 2019 17:43:39 +0300
Message-Id: <20190508144422.13171-20-kirill.shutemov@linux.intel.com>
In-Reply-To: <20190508144422.13171-1-kirill.shutemov@linux.intel.com>
References: <20190508144422.13171-1-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.20.1

Per-KeyID direct mappings require changes to how we find the right
virtual address for a page and how we do virt-to-phys address
translations.

The page_to_virt() definition overrides the default macro provided by
<asm-generic/memory_model.h>.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/include/asm/page.h    | 3 +++
 arch/x86/include/asm/page_64.h | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
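To illustrate what the new page_to_virt() computes, here is a minimal
userspace sketch (not kernel code). PAGE_OFFSET, DIRECT_MAPPING_SIZE and
the KeyID values below are illustrative stand-ins for the kernel's
PAGE_OFFSET, direct_mapping_size and page_keyid():

/*
 * Userspace sketch of the per-KeyID direct-mapping lookup, not kernel code.
 * The constants are assumptions chosen for illustration only.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT		12
#define PAGE_OFFSET		0xffff888000000000ULL	/* x86-64 direct map base */
#define DIRECT_MAPPING_SIZE	(64ULL << 30)		/* assumed size of one KeyID slot */

/* __va(): physical address -> virtual address in the KeyID-0 direct map */
static uint64_t toy_va(uint64_t paddr)
{
	return paddr + PAGE_OFFSET;
}

/* Model of the patched page_to_virt(): shift into the page's KeyID alias */
static uint64_t toy_page_to_virt(uint64_t pfn, unsigned int keyid)
{
	return toy_va(pfn << PAGE_SHIFT) + (uint64_t)keyid * DIRECT_MAPPING_SIZE;
}

int main(void)
{
	uint64_t pfn = 0x1234;

	/* The same page seen through the plain direct map and a KeyID-3 alias */
	printf("keyid 0: %#llx\n", (unsigned long long)toy_page_to_virt(pfn, 0));
	printf("keyid 3: %#llx\n", (unsigned long long)toy_page_to_virt(pfn, 3));
	return 0;
}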
diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index 39af59487d5f..aff30554f38e 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -72,6 +72,9 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
 extern bool __virt_addr_valid(unsigned long kaddr);
 #define virt_addr_valid(kaddr)	__virt_addr_valid((unsigned long) (kaddr))
 
+#define page_to_virt(x) \
+	(__va(PFN_PHYS(page_to_pfn(x))) + page_keyid(x) * direct_mapping_size)
+
 #endif	/* __ASSEMBLY__ */
 
 #include <asm-generic/memory_model.h>
diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index f57fc3cc2246..a4f394e3471d 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -24,7 +24,7 @@ static inline unsigned long __phys_addr_nodebug(unsigned long x)
 	/* use the carry flag to determine if x was < __START_KERNEL_map */
 	x = y + ((x > y) ? phys_base : (__START_KERNEL_map - PAGE_OFFSET));
 
-	return x;
+	return x & direct_mapping_mask;
 }
 
 #ifdef CONFIG_DEBUG_VIRTUAL
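
Similarly, a sketch of the patched __phys_addr_nodebug() behaviour for
direct-mapping addresses, assuming direct_mapping_mask is
direct_mapping_size - 1, so that masking folds every per-KeyID alias back
onto the canonical physical address (again, not kernel code; the constants
are illustrative):

/*
 * Userspace sketch of the patched virt-to-phys path, not kernel code.
 * Assumption: direct_mapping_mask == direct_mapping_size - 1, so masking
 * strips the per-KeyID offset that page_to_virt() added.
 */
#include <assert.h>
#include <stdint.h>

#define PAGE_OFFSET		0xffff888000000000ULL	/* x86-64 direct map base */
#define DIRECT_MAPPING_SIZE	(64ULL << 30)		/* assumed size of one KeyID slot */
#define DIRECT_MAPPING_MASK	(DIRECT_MAPPING_SIZE - 1)

/* Model of __pa() for addresses inside the (per-KeyID) direct mapping */
static uint64_t toy_pa(uint64_t vaddr)
{
	uint64_t x = vaddr - PAGE_OFFSET;	/* undo the direct-map offset */

	return x & DIRECT_MAPPING_MASK;		/* fold KeyID aliases together */
}

int main(void)
{
	uint64_t paddr = 0x1234000;

	/* The same physical page, seen through the KeyID-0 and KeyID-3 aliases */
	uint64_t va0 = PAGE_OFFSET + paddr;
	uint64_t va3 = PAGE_OFFSET + paddr + 3 * DIRECT_MAPPING_SIZE;

	assert(toy_pa(va0) == paddr);
	assert(toy_pa(va3) == paddr);
	return 0;
}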