From patchwork Tue Dec 1 00:47:20 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: "Kalra, Ashish"
X-Patchwork-Id: 11941719
From: Ashish Kalra
To: pbonzini@redhat.com
Cc: tglx@linutronix.de, mingo@redhat.com, hpa@zytor.com, joro@8bytes.org,
	bp@suse.de, thomas.lendacky@amd.com, x86@kernel.org, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, srutherford@google.com, brijesh.singh@amd.com,
	dovmurik@linux.vnet.ibm.com, tobin@ibm.com, jejb@linux.ibm.com,
	frankeh@us.ibm.com, dgilbert@redhat.com
Subject: [PATCH v2 4/9] mm: x86: Invoke hypercall when page encryption status is changed.
Date: Tue, 1 Dec 2020 00:47:20 +0000
Message-Id: <3b095071f6a6ddf11e3ccee94fada9605131ab74.1606782580.git.ashish.kalra@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To:
References:
MIME-Version: 1.0
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org

From: Brijesh Singh

Invoke a hypercall when a memory region is changed from encrypted ->
decrypted and vice versa. The hypervisor needs to know the page encryption
status during guest migration.

Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: Paolo Bonzini
Cc: "Radim Krčmář"
Cc: Joerg Roedel
Cc: Borislav Petkov
Cc: Tom Lendacky
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Venu Busireddy
Signed-off-by: Brijesh Singh
Signed-off-by: Ashish Kalra
---
 arch/x86/include/asm/paravirt.h       | 10 +++++
 arch/x86/include/asm/paravirt_types.h |  2 +
 arch/x86/kernel/paravirt.c            |  1 +
 arch/x86/mm/mem_encrypt.c             | 57 ++++++++++++++++++++++++++-
 arch/x86/mm/pat/set_memory.c          |  7 ++++
 5 files changed, 76 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index d25cc6830e89..7aeb7c508c53 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -84,6 +84,12 @@ static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
 	PVOP_VCALL1(mmu.exit_mmap, mm);
 }
 
+static inline void page_encryption_changed(unsigned long vaddr, int npages,
+					bool enc)
+{
+	PVOP_VCALL3(mmu.page_encryption_changed, vaddr, npages, enc);
+}
+
 #ifdef CONFIG_PARAVIRT_XXL
 static inline void load_sp0(unsigned long sp0)
 {
@@ -840,6 +846,10 @@ static inline void paravirt_arch_dup_mmap(struct mm_struct *oldmm,
 static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
 {
 }
+
+static inline void page_encryption_changed(unsigned long vaddr, int npages, bool enc)
+{
+}
 #endif
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_X86_PARAVIRT_H */
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 0fad9f61c76a..d7787ec4d19f 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -209,6 +209,8 @@ struct pv_mmu_ops {
 
 	/* Hook for intercepting the destruction of an mm_struct. */
 	void (*exit_mmap)(struct mm_struct *mm);
+	void (*page_encryption_changed)(unsigned long vaddr, int npages,
+					bool enc);
 
 #ifdef CONFIG_PARAVIRT_XXL
 	struct paravirt_callee_save read_cr2;
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 6c3407ba6ee9..52913356b6fa 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -340,6 +340,7 @@ struct paravirt_patch_template pv_ops = {
 			(void (*)(struct mmu_gather *, void *))tlb_remove_page,
 
 	.mmu.exit_mmap = paravirt_nop,
+	.mmu.page_encryption_changed = paravirt_nop,
 
 #ifdef CONFIG_PARAVIRT_XXL
 	.mmu.read_cr2 = __PV_IS_CALLEE_SAVE(native_read_cr2),
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index bc0833713be9..9d1ac65050d0 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -29,6 +30,7 @@
 #include
 #include
 #include
+#include
 
 #include "mm_internal.h"
@@ -198,6 +200,47 @@ void __init sme_early_init(void)
 		swiotlb_force = SWIOTLB_FORCE;
 }
 
+static void set_memory_enc_dec_hypercall(unsigned long vaddr, int npages,
+					bool enc)
+{
+	unsigned long sz = npages << PAGE_SHIFT;
+	unsigned long vaddr_end, vaddr_next;
+
+	vaddr_end = vaddr + sz;
+
+	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
+		int psize, pmask, level;
+		unsigned long pfn;
+		pte_t *kpte;
+
+		kpte = lookup_address(vaddr, &level);
+		if (!kpte || pte_none(*kpte))
+			return;
+
+		switch (level) {
+		case PG_LEVEL_4K:
+			pfn = pte_pfn(*kpte);
+			break;
+		case PG_LEVEL_2M:
+			pfn = pmd_pfn(*(pmd_t *)kpte);
+			break;
+		case PG_LEVEL_1G:
+			pfn = pud_pfn(*(pud_t *)kpte);
+			break;
+		default:
+			return;
+		}
+
+		psize = page_level_size(level);
+		pmask = page_level_mask(level);
+
+		kvm_sev_hypercall3(KVM_HC_PAGE_ENC_STATUS,
+				   pfn << PAGE_SHIFT, psize >> PAGE_SHIFT, enc);
+
+		vaddr_next = (vaddr & pmask) + psize;
+	}
+}
+
 static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
 {
 	pgprot_t old_prot, new_prot;
@@ -255,12 +298,13 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
 static int __init early_set_memory_enc_dec(unsigned long vaddr,
 					   unsigned long size, bool enc)
 {
-	unsigned long vaddr_end, vaddr_next;
+	unsigned long vaddr_end, vaddr_next, start;
 	unsigned long psize, pmask;
 	int split_page_size_mask;
 	int level, ret;
 	pte_t *kpte;
 
+	start = vaddr;
 	vaddr_next = vaddr;
 	vaddr_end = vaddr + size;
 
@@ -315,6 +359,8 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr,
 
 	ret = 0;
 
+	set_memory_enc_dec_hypercall(start, PAGE_ALIGN(size) >> PAGE_SHIFT,
+					enc);
 out:
 	__flush_tlb_all();
 	return ret;
@@ -448,6 +494,15 @@ void __init mem_encrypt_init(void)
 	if (sev_active())
 		static_branch_enable(&sev_enable_key);
 
+#ifdef CONFIG_PARAVIRT
+	/*
+	 * With SEV, we need to make a hypercall when page encryption state is
+	 * changed.
+	 */
+	if (sev_active())
+		pv_ops.mmu.page_encryption_changed = set_memory_enc_dec_hypercall;
+#endif
+
 	print_mem_encrypt_feature_info();
 }
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 40baa90e74f4..dcd4557bb7fa 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -27,6 +27,7 @@
 #include
 #include
 #include
+#include
 
 #include "../mm_internal.h"
 
@@ -2012,6 +2013,12 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
 	 */
 	cpa_flush(&cpa, 0);
 
+	/* Notify hypervisor that a given memory range is mapped encrypted
+	 * or decrypted. The hypervisor will use this information during the
+	 * VM migration.
+	 */
+	page_encryption_changed(addr, numpages, enc);
+
 	return ret;
 }
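
Note for readers following the series: the guest-side path this patch wires up is
set_memory_decrypted()/set_memory_encrypted() -> __set_memory_enc_dec() ->
page_encryption_changed() -> set_memory_enc_dec_hypercall() -> KVM_HC_PAGE_ENC_STATUS.
The minimal caller below is illustrative only; the function name and the page
handling are assumptions for the sketch and are not part of this patch:

	#include <linux/errno.h>
	#include <linux/gfp.h>
	#include <linux/set_memory.h>

	/*
	 * Illustrative only: flip one kernel page to "decrypted" (shared with
	 * the host) and back.  With this patch applied, each set_memory_*()
	 * call reaches __set_memory_enc_dec(), which invokes the
	 * page_encryption_changed() paravirt hook; on SEV guests that issues
	 * the KVM_HC_PAGE_ENC_STATUS hypercall so the host can track the
	 * page's encryption status for migration.
	 */
	static int __maybe_unused demo_share_page_with_host(void)
	{
		unsigned long addr = __get_free_page(GFP_KERNEL);
		int ret;

		if (!addr)
			return -ENOMEM;

		ret = set_memory_decrypted(addr, 1);	/* hypercall: enc = false */
		if (ret)
			goto out;

		/* ... exchange data with the hypervisor via the shared page ... */

		ret = set_memory_encrypted(addr, 1);	/* hypercall: enc = true */
	out:
		free_page(addr);
		return ret;
	}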
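
The kvm_sev_hypercall3() helper used by set_memory_enc_dec_hypercall() is
introduced by an earlier patch in this series and is not reproduced here. As a
rough sketch of its assumed shape (not the authoritative definition), it is a
three-argument KVM hypercall wrapper that always emits VMMCALL, the hypercall
instruction on AMD hardware:

	/*
	 * Sketch only: VMMCALL-based three-argument hypercall wrapper, assumed
	 * to match what the earlier patch adds to arch/x86/include/asm/kvm_para.h.
	 */
	static inline long kvm_sev_hypercall3(unsigned int nr, unsigned long p1,
					      unsigned long p2, unsigned long p3)
	{
		long ret;

		asm volatile("vmmcall"
			     : "=a"(ret)
			     : "a"(nr), "b"(p1), "c"(p2), "d"(p3)
			     : "memory");
		return ret;
	}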