From patchwork Fri May 22 12:51:59 2020
X-Patchwork-Id: 11565551
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
    Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry,
    "Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org, kvm@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [RFC 01/16] x86/mm: Move force_dma_unencrypted() to common code
Date: Fri, 22 May 2020 15:51:59 +0300
Message-Id: <20200522125214.31348-2-kirill.shutemov@linux.intel.com>

force_dma_unencrypted() has to return true for a KVM guest with memory
protection enabled. Move it out of the AMD SME code.

Introduce a new config option, X86_MEM_ENCRYPT_COMMON, that has to be
selected by all x86 memory encryption features.

This is preparation for the following patches.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/Kconfig                 |  8 +++++--
 arch/x86/include/asm/io.h        |  4 +++-
 arch/x86/mm/Makefile             |  2 ++
 arch/x86/mm/mem_encrypt.c        | 30 -------------------------
 arch/x86/mm/mem_encrypt_common.c | 38 ++++++++++++++++++++++++++++++++
 5 files changed, 49 insertions(+), 33 deletions(-)
 create mode 100644 arch/x86/mm/mem_encrypt_common.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2d3f963fd6f1..bc72bfd89bcf 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1518,12 +1518,16 @@ config X86_CPA_STATISTICS
           helps to determine the effectiveness of preserving large and huge
           page mappings when mapping protections are changed.
 
+config X86_MEM_ENCRYPT_COMMON
+        select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+        select DYNAMIC_PHYSICAL_MASK
+        def_bool n
+
 config AMD_MEM_ENCRYPT
         bool "AMD Secure Memory Encryption (SME) support"
         depends on X86_64 && CPU_SUP_AMD
-        select DYNAMIC_PHYSICAL_MASK
         select ARCH_USE_MEMREMAP_PROT
-        select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+        select X86_MEM_ENCRYPT_COMMON
         ---help---
           Say yes to enable support for the encryption of system memory.
           This requires an AMD processor that supports Secure Memory

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index e1aa17a468a8..c58d52fd7bf2 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -256,10 +256,12 @@ static inline void slow_down_io(void)
 
 #endif
 
-#ifdef CONFIG_AMD_MEM_ENCRYPT
 #include
 
 extern struct static_key_false sev_enable_key;
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+
 static inline bool sev_key_active(void)
 {
         return static_branch_unlikely(&sev_enable_key);

diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 98f7c6fa2eaa..af8683c053a3 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -49,6 +49,8 @@ obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o
 obj-$(CONFIG_RANDOMIZE_MEMORY)                 += kaslr.o
 obj-$(CONFIG_PAGE_TABLE_ISOLATION)             += pti.o
 
+obj-$(CONFIG_X86_MEM_ENCRYPT_COMMON)   += mem_encrypt_common.o
+
 obj-$(CONFIG_AMD_MEM_ENCRYPT)  += mem_encrypt.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)  += mem_encrypt_identity.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)  += mem_encrypt_boot.o

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index a03614bd3e1a..112304a706f3 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -15,10 +15,6 @@
 #include
 #include
 #include
-#include
-#include
-#include
-#include
 
 #include
 #include
@@ -350,32 +346,6 @@ bool sev_active(void)
         return sme_me_mask && sev_enabled;
 }
 
-/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
-bool force_dma_unencrypted(struct device *dev)
-{
-        /*
-         * For SEV, all DMA must be to unencrypted addresses.
-         */
-        if (sev_active())
-                return true;
-
-        /*
-         * For SME, all DMA must be to unencrypted addresses if the
-         * device does not support DMA to addresses that include the
-         * encryption mask.
-         */
-        if (sme_active()) {
-                u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
-                u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
-                                                dev->bus_dma_limit);
-
-                if (dma_dev_mask <= dma_enc_mask)
-                        return true;
-        }
-
-        return false;
-}
-
 /* Architecture __weak replacement functions */
 void __init mem_encrypt_free_decrypted_mem(void)
 {

diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c
new file mode 100644
index 000000000000..964e04152417
--- /dev/null
+++ b/arch/x86/mm/mem_encrypt_common.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky
+ */
+
+#include
+#include
+#include
+
+/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
+bool force_dma_unencrypted(struct device *dev)
+{
+        /*
+         * For SEV, all DMA must be to unencrypted/shared addresses.
+         */
+        if (sev_active())
+                return true;
+
+        /*
+         * For SME, all DMA must be to unencrypted addresses if the
+         * device does not support DMA to addresses that include the
+         * encryption mask.
+         */
+        if (sme_active()) {
+                u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
+                u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
+                                                dev->bus_dma_limit);
+
+                if (dma_dev_mask <= dma_enc_mask)
+                        return true;
+        }
+
+        return false;
+}
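Illustration (editorial, not part of the series): the SME branch above is pure mask
arithmetic; a device whose DMA mask cannot reach addresses that carry the encryption
bit must get unencrypted buffers. A minimal stand-alone sketch of that check, with a
made-up sme_me_mask (bit 47) and a hypothetical 32-bit-only device:

/*
 * Illustration only: why "dma_dev_mask <= dma_enc_mask" forces unencrypted
 * DMA. The C-bit position and the device mask below are assumptions for the
 * demo, not values from real hardware.
 * Build: gcc -o dma-mask-demo dma-mask-demo.c
 */
#include <stdint.h>
#include <stdio.h>

#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

int main(void)
{
        uint64_t sme_me_mask = 1ULL << 47;        /* assumed C-bit position */
        /* Highest address reachable without setting the encryption bit */
        uint64_t dma_enc_mask = DMA_BIT_MASK(__builtin_ctzll(sme_me_mask));
        uint64_t dma_dev_mask = DMA_BIT_MASK(32); /* hypothetical 32-bit device */

        /* The device cannot address encrypted pages, so the DMA layer must
         * hand out unencrypted (shared) buffers for it. */
        printf("force unencrypted DMA: %s\n",
               dma_dev_mask <= dma_enc_mask ? "yes" : "no");
        return 0;
}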
From patchwork Fri May 22 12:52:00 2020
X-Patchwork-Id: 11565613
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
    Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry,
    "Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org, kvm@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [RFC 02/16] x86/kvm: Introduce KVM memory protection feature
Date: Fri, 22 May 2020 15:52:00 +0300
Message-Id: <20200522125214.31348-3-kirill.shutemov@linux.intel.com>

Provide basic helpers, a KVM_FEATURE bit and a hypercall.

The host side doesn't provide the feature yet, so this is dead code for now.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/include/asm/kvm_para.h      |  5 +++++
 arch/x86/include/uapi/asm/kvm_para.h |  3 ++-
 arch/x86/kernel/kvm.c                | 16 ++++++++++++++++
 include/uapi/linux/kvm_para.h        |  3 ++-
 4 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
index 9b4df6eaa11a..3ce84fc07144 100644
--- a/arch/x86/include/asm/kvm_para.h
+++ b/arch/x86/include/asm/kvm_para.h
@@ -10,11 +10,16 @@ extern void kvmclock_init(void);
 
 #ifdef CONFIG_KVM_GUEST
 bool kvm_check_and_clear_guest_paused(void);
+bool kvm_mem_protected(void);
 #else
 static inline bool kvm_check_and_clear_guest_paused(void)
 {
         return false;
 }
+static inline bool kvm_mem_protected(void)
+{
+        return false;
+}
 #endif /* CONFIG_KVM_GUEST */
 
 #define KVM_HYPERCALL \

diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 2a8e0b6b9805..c3b499acc98f 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -28,9 +28,10 @@
 #define KVM_FEATURE_PV_UNHALT          7
 #define KVM_FEATURE_PV_TLB_FLUSH       9
 #define KVM_FEATURE_ASYNC_PF_VMEXIT    10
-#define KVM_FEATURE_PV_SEND_IPI        11
+#define KVM_FEATURE_PV_SEND_IPI                11
 #define KVM_FEATURE_POLL_CONTROL       12
 #define KVM_FEATURE_PV_SCHED_YIELD     13
+#define KVM_FEATURE_MEM_PROTECTED      14
 
 #define KVM_HINTS_REALTIME      0

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 6efe0410fb72..bda761ca0d26 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -35,6 +35,13 @@
 #include
 #include
 
+static bool mem_protected;
+
+bool kvm_mem_protected(void)
+{
+        return mem_protected;
+}
+
 static int kvmapf = 1;
 
 static int __init parse_no_kvmapf(char *arg)
@@ -727,6 +734,15 @@ static void __init kvm_init_platform(void)
 {
         kvmclock_init();
         x86_platform.apic_post_init = kvm_apic_init;
+
+        if (kvm_para_has_feature(KVM_FEATURE_MEM_PROTECTED)) {
+                if (kvm_hypercall0(KVM_HC_ENABLE_MEM_PROTECTED)) {
+                        pr_err("Failed to enable KVM memory protection\n");
+                        return;
+                }
+
+                mem_protected = true;
+        }
 }
 
 const __initconst struct hypervisor_x86 x86_hyper_kvm = {

diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
index 8b86609849b9..1a216f32e572 100644
--- a/include/uapi/linux/kvm_para.h
+++ b/include/uapi/linux/kvm_para.h
@@ -27,8 +27,9 @@
 #define KVM_HC_MIPS_EXIT_VM            7
 #define KVM_HC_MIPS_CONSOLE_OUTPUT     8
 #define KVM_HC_CLOCK_PAIRING           9
-#define KVM_HC_SEND_IPI        10
+#define KVM_HC_SEND_IPI                10
 #define KVM_HC_SCHED_YIELD             11
+#define KVM_HC_ENABLE_MEM_PROTECTED    12
 
 /*
  * hypercalls use architecture specific
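Illustration (editorial, not part of the series): once the feature bit and hypercall
exist, guest code is expected to branch on kvm_mem_protected(). The caller below is
hypothetical; only the helper and the boot-time hypercall come from this patch.

/*
 * Hypothetical guest-side caller: pick a setup path depending on whether the
 * host granted memory protection at boot. kvm_mem_protected() is the helper
 * added by this patch; everything else here is illustrative only.
 */
#include <linux/printk.h>
#include <asm/kvm_para.h>

static void example_init_shared_state(void)
{
        if (kvm_mem_protected()) {
                /* Guest memory is private by default; anything the host must
                 * see has to be converted explicitly (see later patches). */
                pr_info("KVM memory protection active\n");
        } else {
                /* Legacy behaviour: all guest memory is host-accessible. */
                pr_info("KVM memory protection not available\n");
        }
}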
From patchwork Fri May 22 12:52:01 2020
X-Patchwork-Id: 11565601
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
    Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry,
    "Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org, kvm@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [RFC 03/16] x86/kvm: Make DMA pages shared
Date: Fri, 22 May 2020 15:52:01 +0300
Message-Id: <20200522125214.31348-4-kirill.shutemov@linux.intel.com>

Make force_dma_unencrypted() return true for KVM guests so that DMA pages
get mapped as shared.

__set_memory_enc_dec() now informs the host via hypercall whenever the
state of a page changes from shared to private or back.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/Kconfig                 | 1 +
 arch/x86/mm/mem_encrypt_common.c | 5 +++--
 arch/x86/mm/pat/set_memory.c     | 7 +++++++
 include/uapi/linux/kvm_para.h    | 2 ++
 4 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index bc72bfd89bcf..86c012582f51 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -799,6 +799,7 @@ config KVM_GUEST
         depends on PARAVIRT
         select PARAVIRT_CLOCK
         select ARCH_CPUIDLE_HALTPOLL
+        select X86_MEM_ENCRYPT_COMMON
         default y
         ---help---
           This option enables various optimizations for running under the KVM

diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c
index 964e04152417..a878e7f246d5 100644
--- a/arch/x86/mm/mem_encrypt_common.c
+++ b/arch/x86/mm/mem_encrypt_common.c
@@ -10,14 +10,15 @@
 #include
 #include
 #include
+#include
 
 /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
 bool force_dma_unencrypted(struct device *dev)
 {
         /*
-         * For SEV, all DMA must be to unencrypted/shared addresses.
+         * For SEV and KVM, all DMA must be to unencrypted/shared addresses.
          */
-        if (sev_active())
+        if (sev_active() || kvm_mem_protected())
                 return true;
 
         /*

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index b8c55a2e402d..6f075766bb94 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -1972,6 +1973,12 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
         struct cpa_data cpa;
         int ret;
 
+        if (kvm_mem_protected()) {
+                unsigned long gfn = __pa(addr) >> PAGE_SHIFT;
+                int call = enc ? KVM_HC_MEM_UNSHARE : KVM_HC_MEM_SHARE;
+                return kvm_hypercall2(call, gfn, numpages);
+        }
+
         /* Nothing to do if memory encryption is not active */
         if (!mem_encrypt_active())
                 return 0;

diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
index 1a216f32e572..c6d8c988e330 100644
--- a/include/uapi/linux/kvm_para.h
+++ b/include/uapi/linux/kvm_para.h
@@ -30,6 +30,8 @@
 #define KVM_HC_SEND_IPI                10
 #define KVM_HC_SCHED_YIELD             11
 #define KVM_HC_ENABLE_MEM_PROTECTED    12
+#define KVM_HC_MEM_SHARE               13
+#define KVM_HC_MEM_UNSHARE             14
 
 /*
  * hypercalls use architecture specific
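Illustration (editorial, not part of the series): a hypothetical guest driver that
needs a host-visible page keeps using the existing set_memory_decrypted() API; under
KVM memory protection the call now ends up in the KVM_HC_MEM_SHARE hypercall added
above rather than in a page-table attribute change.

/*
 * Hypothetical driver snippet: allocate one page and share it with the host.
 * set_memory_decrypted() is the existing kernel API; with this patch it is
 * routed through __set_memory_enc_dec() to the KVM_HC_MEM_SHARE hypercall.
 */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <asm/set_memory.h>

static void *example_alloc_shared_page(void)
{
        struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);

        if (!page)
                return NULL;

        /* One page, counted in pages: make it visible to the host. */
        if (set_memory_decrypted((unsigned long)page_address(page), 1)) {
                __free_page(page);
                return NULL;
        }
        return page_address(page);
}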
From patchwork Fri May 22 12:52:02 2020
X-Patchwork-Id: 11565559
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
    Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry,
    "Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org, kvm@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [RFC 04/16] x86/kvm: Use bounce buffers for KVM memory protection
Date: Fri, 22 May 2020 15:52:02 +0300
Message-Id: <20200522125214.31348-5-kirill.shutemov@linux.intel.com>

Mirroring SEV, always use SWIOTLB bounce buffers if KVM memory protection
is enabled.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/Kconfig                 |  1 +
 arch/x86/kernel/kvm.c            |  2 ++
 arch/x86/kernel/pci-swiotlb.c    |  3 ++-
 arch/x86/mm/mem_encrypt.c        | 20 --------------------
 arch/x86/mm/mem_encrypt_common.c | 23 +++++++++++++++++++++++
 5 files changed, 28 insertions(+), 21 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 86c012582f51..58dd44a1b92f 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -800,6 +800,7 @@ config KVM_GUEST
         select PARAVIRT_CLOCK
         select ARCH_CPUIDLE_HALTPOLL
         select X86_MEM_ENCRYPT_COMMON
+        select SWIOTLB
         default y
         ---help---
           This option enables various optimizations for running under the KVM

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index bda761ca0d26..f50d65df4412 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -742,6 +743,7 @@ static void __init kvm_init_platform(void)
                 }
 
                 mem_protected = true;
+                swiotlb_force = SWIOTLB_FORCE;
         }
 }

diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index c2cfa5e7c152..814060a6ceb0 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 
 int swiotlb __read_mostly;
 
@@ -49,7 +50,7 @@ int __init pci_swiotlb_detect_4gb(void)
          * buffers are allocated and used for devices that do not support
          * the addressing range required for the encryption mask.
          */
-        if (sme_active())
+        if (sme_active() || kvm_mem_protected())
                 swiotlb = 1;
 
         return swiotlb;

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 112304a706f3..35c748ee3fcb 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -370,23 +370,3 @@ void __init mem_encrypt_free_decrypted_mem(void)
 
         free_init_pages("unused decrypted", vaddr, vaddr_end);
 }
-
-void __init mem_encrypt_init(void)
-{
-        if (!sme_me_mask)
-                return;
-
-        /* Call into SWIOTLB to update the SWIOTLB DMA buffers */
-        swiotlb_update_mem_attributes();
-
-        /*
-         * With SEV, we need to unroll the rep string I/O instructions.
-         */
-        if (sev_active())
-                static_branch_enable(&sev_enable_key);
-
-        pr_info("AMD %s active\n",
-                sev_active() ? "Secure Encrypted Virtualization (SEV)"
-                             : "Secure Memory Encryption (SME)");
-}
-

diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c
index a878e7f246d5..7900f3788010 100644
--- a/arch/x86/mm/mem_encrypt_common.c
+++ b/arch/x86/mm/mem_encrypt_common.c
@@ -37,3 +37,26 @@ bool force_dma_unencrypted(struct device *dev)
 
         return false;
 }
+
+void __init mem_encrypt_init(void)
+{
+        if (!sme_me_mask && !kvm_mem_protected())
+                return;
+
+        /* Call into SWIOTLB to update the SWIOTLB DMA buffers */
+        swiotlb_update_mem_attributes();
+
+        /*
+         * With SEV, we need to unroll the rep string I/O instructions.
+         */
+        if (sev_active())
+                static_branch_enable(&sev_enable_key);
+
+        if (sme_me_mask) {
+                pr_info("AMD %s active\n",
+                        sev_active() ? "Secure Encrypted Virtualization (SEV)"
+                                     : "Secure Memory Encryption (SME)");
+        } else {
+                pr_info("KVM memory protection enabled\n");
+        }
+}

From patchwork Fri May 22 12:52:03 2020
X-Patchwork-Id: 11565607
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
    Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry,
    "Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org, kvm@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [RFC 05/16] x86/kvm: Make VirtIO use DMA API in KVM guest
Date: Fri, 22 May 2020 15:52:03 +0300
Message-Id: <20200522125214.31348-6-kirill.shutemov@linux.intel.com>

VirtIO is the primary way to provide IO for KVM guests. All memory that is
used for communication with the host has to be marked as shared. The
easiest way to achieve that is to use the DMA API, which already knows how
to deal with shared memory.

Signed-off-by: Kirill A. Shutemov
---
 drivers/virtio/virtio_ring.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 58b96baa8d48..bd9c56160107 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 
 #ifdef DEBUG
 /* For development, we want to crash whenever the ring is screwed. */
@@ -255,6 +256,9 @@ static bool vring_use_dma_api(struct virtio_device *vdev)
         if (xen_domain())
                 return true;
 
+        if (kvm_mem_protected())
+                return true;
+
         return false;
 }
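Illustration (editorial, not part of the series): with vring_use_dma_api() returning
true, virtqueue memory and buffers go through the DMA API, so they end up shared with
the host either directly (coherent allocations, via force_dma_unencrypted() from
patches 1 and 3) or via the SWIOTLB bounce buffers forced in the previous patch. The
device and size below are hypothetical.

/*
 * Hypothetical illustration: coherent queue memory allocated through the DMA
 * API is already shared under KVM memory protection, because
 * dma_alloc_coherent() consults force_dma_unencrypted() for this device.
 */
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

static void *example_alloc_virtqueue(struct device *dev, size_t size,
                                     dma_addr_t *dma)
{
        return dma_alloc_coherent(dev, size, dma, GFP_KERNEL);
}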
From patchwork Fri May 22 12:52:04 2020
X-Patchwork-Id: 11565609
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
    Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry,
    "Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org, kvm@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [RFC 06/16] KVM: Use GUP instead of copy_from/to_user() to access guest memory
Date: Fri, 22 May 2020 15:52:04 +0300
Message-Id: <20200522125214.31348-7-kirill.shutemov@linux.intel.com>

Introduce new helpers, copy_from_guest()/copy_to_guest(), to be used when
the KVM memory protection feature is enabled.

Signed-off-by: Kirill A. Shutemov
---
 include/linux/kvm_host.h |  4 +++
 virt/kvm/kvm_main.c      | 78 ++++++++++++++++++++++++++++++++++------
 2 files changed, 72 insertions(+), 10 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 131cc1527d68..bd0bb600f610 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -503,6 +503,7 @@ struct kvm {
         struct srcu_struct srcu;
         struct srcu_struct irq_srcu;
         pid_t userspace_pid;
+        bool mem_protected;
 };
 
 #define kvm_err(fmt, ...) \
@@ -727,6 +728,9 @@ void kvm_set_pfn_dirty(kvm_pfn_t pfn);
 void kvm_set_pfn_accessed(kvm_pfn_t pfn);
 void kvm_get_pfn(kvm_pfn_t pfn);
 
+int copy_from_guest(void *data, unsigned long hva, int len);
+int copy_to_guest(unsigned long hva, const void *data, int len);
+
 void kvm_release_pfn(kvm_pfn_t pfn, bool dirty, struct gfn_to_pfn_cache *cache);
 int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
                         int len);

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 731c1e517716..033471f71dae 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2248,8 +2248,48 @@ static int next_segment(unsigned long len, int offset)
                 return len;
 }
 
+int copy_from_guest(void *data, unsigned long hva, int len)
+{
+        int offset = offset_in_page(hva);
+        struct page *page;
+        int npages, seg;
+
+        while ((seg = next_segment(len, offset)) != 0) {
+                npages = get_user_pages_unlocked(hva, 1, &page, 0);
+                if (npages != 1)
+                        return -EFAULT;
+                memcpy(data, page_address(page) + offset, seg);
+                put_page(page);
+                len -= seg;
+                hva += seg;
+                offset = 0;
+        }
+
+        return 0;
+}
+
+int copy_to_guest(unsigned long hva, const void *data, int len)
+{
+        int offset = offset_in_page(hva);
+        struct page *page;
+        int npages, seg;
+
+        while ((seg = next_segment(len, offset)) != 0) {
+                npages = get_user_pages_unlocked(hva, 1, &page, FOLL_WRITE);
+                if (npages != 1)
+                        return -EFAULT;
+                memcpy(page_address(page) + offset, data, seg);
+                put_page(page);
+                len -= seg;
+                hva += seg;
+                offset = 0;
+        }
+        return 0;
+}
+
 static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
-                                 void *data, int offset, int len)
+                                 void *data, int offset, int len,
+                                 bool protected)
 {
         int r;
         unsigned long addr;
@@ -2257,7 +2297,10 @@ static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
         addr = gfn_to_hva_memslot_prot(slot, gfn, NULL);
         if (kvm_is_error_hva(addr))
                 return -EFAULT;
-        r = __copy_from_user(data, (void __user *)addr + offset, len);
+        if (protected)
+                r = copy_from_guest(data, addr + offset, len);
+        else
+                r = __copy_from_user(data, (void __user *)addr + offset, len);
         if (r)
                 return -EFAULT;
         return 0;
@@ -2268,7 +2311,8 @@ int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
 {
         struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
 
-        return __kvm_read_guest_page(slot, gfn, data, offset, len);
+        return __kvm_read_guest_page(slot, gfn, data, offset, len,
+                                     kvm->mem_protected);
 }
 EXPORT_SYMBOL_GPL(kvm_read_guest_page);
 
@@ -2277,7 +2321,8 @@ int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data,
 {
         struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 
-        return __kvm_read_guest_page(slot, gfn, data, offset, len);
+        return __kvm_read_guest_page(slot, gfn, data, offset, len,
+                                     vcpu->kvm->mem_protected);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_page);
 
@@ -2350,7 +2395,8 @@ int kvm_vcpu_read_guest_atomic(struct kvm_vcpu *vcpu, gpa_t gpa,
 EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_atomic);
 
 static int __kvm_write_guest_page(struct kvm_memory_slot *memslot, gfn_t gfn,
-                                  const void *data, int offset, int len)
+                                  const void *data, int offset, int len,
+                                  bool protected)
 {
         int r;
         unsigned long addr;
@@ -2358,7 +2404,11 @@ static int __kvm_write_guest_page(struct kvm_memory_slot *memslot, gfn_t gfn,
         addr = gfn_to_hva_memslot(memslot, gfn);
         if (kvm_is_error_hva(addr))
                 return -EFAULT;
-        r = __copy_to_user((void __user *)addr + offset, data, len);
+
+        if (protected)
+                r = copy_to_guest(addr + offset, data, len);
+        else
+                r = __copy_to_user((void __user *)addr + offset, data, len);
         if (r)
                 return -EFAULT;
         mark_page_dirty_in_slot(memslot, gfn);
@@ -2370,7 +2420,8 @@ int kvm_write_guest_page(struct kvm *kvm, gfn_t gfn,
 {
         struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
 
-        return __kvm_write_guest_page(slot, gfn, data, offset, len);
+        return __kvm_write_guest_page(slot, gfn, data, offset, len,
+                                      kvm->mem_protected);
 }
 EXPORT_SYMBOL_GPL(kvm_write_guest_page);
 
@@ -2379,7 +2430,8 @@ int kvm_vcpu_write_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
 {
         struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 
-        return __kvm_write_guest_page(slot, gfn, data, offset, len);
+        return __kvm_write_guest_page(slot, gfn, data, offset, len,
+                                      vcpu->kvm->mem_protected);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_write_guest_page);
 
@@ -2495,7 +2547,10 @@ int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
         if (unlikely(!ghc->memslot))
                 return kvm_write_guest(kvm, gpa, data, len);
 
-        r = __copy_to_user((void __user *)ghc->hva + offset, data, len);
+        if (kvm->mem_protected)
+                r = copy_to_guest(ghc->hva + offset, data, len);
+        else
+                r = __copy_to_user((void __user *)ghc->hva + offset, data, len);
         if (r)
                 return -EFAULT;
         mark_page_dirty_in_slot(ghc->memslot, gpa >> PAGE_SHIFT);
@@ -2530,7 +2585,10 @@ int kvm_read_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
         if (unlikely(!ghc->memslot))
                 return kvm_read_guest(kvm, ghc->gpa, data, len);
 
-        r = __copy_from_user(data, (void __user *)ghc->hva, len);
+        if (kvm->mem_protected)
+                r = copy_from_guest(data, ghc->hva, len);
+        else
+                r = __copy_from_user(data, (void __user *)ghc->hva, len);
         if (r)
                 return -EFAULT;
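Illustration (editorial, not part of the series): host-side callers do not change.
Any existing kvm_read_guest()/kvm_write_guest() user transparently takes the
GUP-based path for protected VMs. A hypothetical caller:

/*
 * Hypothetical host-side caller: kvm_read_guest() dispatches internally to
 * copy_from_guest() (GUP + memcpy) when kvm->mem_protected is set, and to
 * __copy_from_user() otherwise; the caller is unchanged either way.
 */
#include <linux/kvm_host.h>

static int example_read_guest_u64(struct kvm *kvm, gpa_t gpa, u64 *val)
{
        return kvm_read_guest(kvm, gpa, val, sizeof(*val));
}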
From patchwork Fri May 22 12:52:05 2020
X-Patchwork-Id: 11565603
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
    Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry,
    "Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org, kvm@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [RFC 07/16] KVM: mm: Introduce VM_KVM_PROTECTED
Date: Fri, 22 May 2020 15:52:05 +0300
Message-Id: <20200522125214.31348-8-kirill.shutemov@linux.intel.com>

Add a new VMA flag that indicates a VMA that is not accessible to userspace
but is usable by the kernel with GUP if FOLL_KVM is specified.

FOLL_KVM is only used in the KVM code. The code has to know how to deal
with such pages.

Signed-off-by: Kirill A. Shutemov
---
 include/linux/mm.h  |  8 ++++++++
 mm/gup.c            | 20 ++++++++++++++++----
 mm/huge_memory.c    | 20 ++++++++++++++++----
 mm/memory.c         |  3 +++
 mm/mmap.c           |  3 +++
 virt/kvm/async_pf.c |  4 ++--
 virt/kvm/kvm_main.c |  9 +++++----
 7 files changed, 53 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index e1882eec1752..4f7195365cc0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -329,6 +329,8 @@ extern unsigned int kobjsize(const void *objp);
 # define VM_MAPPED_COPY        VM_ARCH_1  /* T if mapped copy of data (nommu mmap) */
 #endif
 
+#define VM_KVM_PROTECTED 0
+
 #ifndef VM_GROWSUP
 # define VM_GROWSUP    VM_NONE
 #endif
@@ -646,6 +648,11 @@ static inline bool vma_is_accessible(struct vm_area_struct *vma)
         return vma->vm_flags & VM_ACCESS_FLAGS;
 }
 
+static inline bool vma_is_kvm_protected(struct vm_area_struct *vma)
+{
+        return vma->vm_flags & VM_KVM_PROTECTED;
+}
+
 #ifdef CONFIG_SHMEM
 /*
  * The vma_is_shmem is not inline because it is used only by slow
@@ -2773,6 +2780,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 #define FOLL_LONGTERM  0x10000 /* mapping lifetime is indefinite: see below */
 #define FOLL_SPLIT_PMD 0x20000 /* split huge pmd before returning */
 #define FOLL_PIN       0x40000 /* pages must be released via unpin_user_page */
+#define FOLL_KVM       0x80000 /* access to VM_KVM_PROTECTED VMAs */
 
 /*
  * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each

diff --git a/mm/gup.c b/mm/gup.c
index 87a6a59fe667..bd7b9484b35a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -385,10 +385,19 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
  * FOLL_FORCE can write to even unwritable pte's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
+static inline bool can_follow_write_pte(struct vm_area_struct *vma,
+                                        pte_t pte, unsigned int flags)
 {
-        return pte_write(pte) ||
-                ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
+        if (pte_write(pte))
+                return true;
+
+        if ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte))
+                return true;
+
+        if (!vma_is_kvm_protected(vma) || !(vma->vm_flags & VM_WRITE))
+                return false;
+
+        return (vma->vm_flags & VM_SHARED) || page_mapcount(pte_page(pte)) == 1;
 }
 
 static struct page *follow_page_pte(struct vm_area_struct *vma,
@@ -431,7 +440,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
         }
         if ((flags & FOLL_NUMA) && pte_protnone(pte))
                 goto no_page;
-        if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
+        if ((flags & FOLL_WRITE) && !can_follow_write_pte(vma, pte, flags)) {
                 pte_unmap_unlock(ptep, ptl);
                 return NULL;
         }
@@ -751,6 +760,9 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 
         ctx->page_mask = 0;
 
+        if (vma_is_kvm_protected(vma) && (flags & FOLL_KVM))
+                flags &= ~FOLL_NUMA;
+
         /* make this handle hugepd */
         page = follow_huge_addr(mm, address, flags & FOLL_WRITE);
         if (!IS_ERR(page)) {

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6ecd1045113b..c3562648a4ef 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1518,10 +1518,19 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
  * FOLL_FORCE can write to even unwritable pmd's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+static inline bool can_follow_write_pmd(struct vm_area_struct *vma,
+                                        pmd_t pmd, unsigned int flags)
 {
-        return pmd_write(pmd) ||
-                ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
+        if (pmd_write(pmd))
+                return true;
+
+        if ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd))
+                return true;
+
+        if (!vma_is_kvm_protected(vma) || !(vma->vm_flags & VM_WRITE))
+                return false;
+
+        return (vma->vm_flags & VM_SHARED) || page_mapcount(pmd_page(pmd)) == 1;
 }
 
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
@@ -1534,7 +1543,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 
         assert_spin_locked(pmd_lockptr(mm, pmd));
 
-        if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
+        if (flags & FOLL_WRITE && !can_follow_write_pmd(vma, *pmd, flags))
                 goto out;
 
         /* Avoid dumping huge zero page */
@@ -1609,6 +1618,9 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
         bool was_writable;
         int flags = 0;
 
+        if (vma_is_kvm_protected(vma))
+                return VM_FAULT_SIGBUS;
+
         vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
         if (unlikely(!pmd_same(pmd, *vmf->pmd)))
                 goto out_unlock;

diff --git a/mm/memory.c b/mm/memory.c
index f703fe8c8346..d7228db6e4bf 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4013,6 +4013,9 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
         bool was_writable = pte_savedwrite(vmf->orig_pte);
         int flags = 0;
 
+        if (vma_is_kvm_protected(vma))
+                return VM_FAULT_SIGBUS;
+
         /*
          * The "pte" at this point cannot be used safely without
          * validation through pte_unmap_same(). It's of NUMA type but

diff --git a/mm/mmap.c b/mm/mmap.c
index f609e9ec4a25..d56c3f6efc99 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -112,6 +112,9 @@ pgprot_t vm_get_page_prot(unsigned long vm_flags)
                                 (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]) |
                         pgprot_val(arch_vm_get_page_prot(vm_flags)));
 
+        if (vm_flags & VM_KVM_PROTECTED)
+                ret = PAGE_NONE;
+
         return arch_filter_pgprot(ret);
 }
 EXPORT_SYMBOL(vm_get_page_prot);

diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 15e5b037f92d..7663e962510a 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -60,8 +60,8 @@ static void async_pf_execute(struct work_struct *work)
          * access remotely.
          */
         down_read(&mm->mmap_sem);
-        get_user_pages_remote(NULL, mm, addr, 1, FOLL_WRITE, NULL, NULL,
-                              &locked);
+        get_user_pages_remote(NULL, mm, addr, 1, FOLL_WRITE | FOLL_KVM, NULL,
+                              NULL, &locked);
         if (locked)
                 up_read(&mm->mmap_sem);

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 033471f71dae..530af95efdf3 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1727,7 +1727,7 @@ unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *w
 
 static inline int check_user_page_hwpoison(unsigned long addr)
 {
-        int rc, flags = FOLL_HWPOISON | FOLL_WRITE;
+        int rc, flags = FOLL_HWPOISON | FOLL_WRITE | FOLL_KVM;
 
         rc = get_user_pages(addr, 1, flags, NULL, NULL);
         return rc == -EHWPOISON;
@@ -1771,7 +1771,7 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
 static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
                            bool *writable, kvm_pfn_t *pfn)
 {
-        unsigned int flags = FOLL_HWPOISON;
+        unsigned int flags = FOLL_HWPOISON | FOLL_KVM;
         struct page *page;
         int npages = 0;
 
@@ -2255,7 +2255,7 @@ int copy_from_guest(void *data, unsigned long hva, int len)
         int npages, seg;
 
         while ((seg = next_segment(len, offset)) != 0) {
-                npages = get_user_pages_unlocked(hva, 1, &page, 0);
+                npages = get_user_pages_unlocked(hva, 1, &page, FOLL_KVM);
                 if (npages != 1)
                         return -EFAULT;
                 memcpy(data, page_address(page) + offset, seg);
@@ -2275,7 +2275,8 @@ int copy_to_guest(unsigned long hva, const void *data, int len)
         int npages, seg;
 
         while ((seg = next_segment(len, offset)) != 0) {
-                npages = get_user_pages_unlocked(hva, 1, &page, FOLL_WRITE);
+                npages = get_user_pages_unlocked(hva, 1, &page,
+                                                 FOLL_WRITE | FOLL_KVM);
                 if (npages != 1)
                         return -EFAULT;
                 memcpy(page_address(page) + offset, data, seg);
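Illustration (editorial, not part of the series): the intended FOLL_KVM contract. A
VM_KVM_PROTECTED range is mapped PAGE_NONE for userspace, so ordinary loads/stores
and plain GUP fail, while kernel code that passes FOLL_KVM can still pin and read the
page. The helper below is hypothetical.

/*
 * Hypothetical helper: read from a VM_KVM_PROTECTED range. Without FOLL_KVM
 * the get_user_pages_unlocked() call below would fail on such a VMA.
 * len is assumed to stay within a single page.
 */
#include <linux/mm.h>
#include <linux/string.h>

static int example_peek_protected(unsigned long hva, void *buf, int len)
{
        struct page *page;

        if (get_user_pages_unlocked(hva, 1, &page, FOLL_KVM) != 1)
                return -EFAULT;

        memcpy(buf, page_address(page) + offset_in_page(hva), len);
        put_page(page);
        return 0;
}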
From patchwork Fri May 22 12:52:06 2020
X-Patchwork-Id: 11565611
From: "Kirill A. Shutemov"
Shutemov" Received: by box.localdomain (Postfix, from userid 1000) id D8584102056; Fri, 22 May 2020 15:52:19 +0300 (+03) To: Dave Hansen , Andy Lutomirski , Peter Zijlstra , Paolo Bonzini , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel Cc: David Rientjes , Andrea Arcangeli , Kees Cook , Will Drewry , "Edgecombe, Rick P" , "Kleen, Andi" , x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [RFC 08/16] KVM: x86: Use GUP for page walk instead of __get_user() Date: Fri, 22 May 2020 15:52:06 +0300 Message-Id: <20200522125214.31348-9-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The user mapping doesn't have the page mapping for protected memory. Signed-off-by: Kirill A. Shutemov --- arch/x86/kvm/mmu/paging_tmpl.h | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 9bdf9b7d9a96..ef0c5bc8ad7e 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -400,8 +400,14 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker, goto error; ptep_user = (pt_element_t __user *)((void *)host_addr + offset); - if (unlikely(__get_user(pte, ptep_user))) - goto error; + if (vcpu->kvm->mem_protected) { + if (copy_from_guest(&pte, host_addr + offset, + sizeof(pte))) + goto error; + } else { + if (unlikely(__get_user(pte, ptep_user))) + goto error; + } walker->ptep_user[walker->level - 1] = ptep_user; trace_kvm_mmu_paging_element(pte, walker->level); From patchwork Fri May 22 12:52:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A . 
Shutemov" X-Patchwork-Id: 11565605 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 38EB290 for ; Fri, 22 May 2020 12:53:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1CFCC206D5 for ; Fri, 22 May 2020 12:53:55 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="uMPxr4NV" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730218AbgEVMxi (ORCPT ); Fri, 22 May 2020 08:53:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38572 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729861AbgEVMwY (ORCPT ); Fri, 22 May 2020 08:52:24 -0400 Received: from mail-lf1-x143.google.com (mail-lf1-x143.google.com [IPv6:2a00:1450:4864:20::143]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 268B7C08C5CA for ; Fri, 22 May 2020 05:52:24 -0700 (PDT) Received: by mail-lf1-x143.google.com with SMTP id w15so6418472lfe.11 for ; Fri, 22 May 2020 05:52:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov-name.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=j2/vuMi890bpJ5y0t7xywOAn8150Dzh7/cc8lYIsDKY=; b=uMPxr4NVfF3vE1qVqq7sYsFj93VH+Y34ADY3/+IJkn11M5r/WatHHUjnukprdosb1T yo55gN9SX8HuBKmD+23WnyRvesSBKuSXQVtISqJ7086/9z6VeO2Xww8Ar+v85g91tiJt W0C/h2VXuOHKVnZNbOcSQkmhe8wicTwNh8SySiF7S62NpX2whckYamOo0Y5tYn0NNACH lbYF8qvggQ3Hq+I2RP22XsY014kHkea5U1sutNdpE0tb9CGfLQtUaz+qUWdf4lxp/yZZ EVwfQUaqG5nxl49hfwRyoY83XXfPpb3nZL1+IYFkOn7WLdUmDIZ44pLogc8bXKvC0BG/ sDgw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=j2/vuMi890bpJ5y0t7xywOAn8150Dzh7/cc8lYIsDKY=; b=eO0ePm30JhrOI+/neREllZR3lHyU/hOZXNYv0qlXDkmHg+jgC+YtZiZXHh0WDnyxtV 6yRxLwEdZZ3lks4iy3DG5tP/uF2v3Ls/chy0AskdvCfqN4VpNFo7HynZG4N9LbaseOib lBSUes/pwCE+6WwnVw5AW8/se+xgNmfu7YVuoBF9xSqJqa7KygWtSza256Ol0uxuwjCJ Sp0AFsTmQhlNHs9Y6aoumsvi6bhxW+xmXA2hr5SriCTp08R5kUa23ziZpSEFW/3OgBFN l0gtKASNb8325AvcaWN+0FMu3GseA9UkgNzogBvDo2W4GkIZwx+9l/r3vnTJgWeEzLuA /bvQ== X-Gm-Message-State: AOAM532nFIVtTQWMdK7jsvwyQE4DIV0M985fQDOFNwFSuhG6bRMLQW7R vVW19D8T0CGYt3nDgVP0Es6GTw== X-Google-Smtp-Source: ABdhPJwQqkBG5wVjIjX0BwokaMhLdwCURG8El+OhoC+P0UogOvR1pNP5w8yY/1CvTRn+oqftAvC5tA== X-Received: by 2002:a19:c64c:: with SMTP id w73mr7336911lff.67.1590151942584; Fri, 22 May 2020 05:52:22 -0700 (PDT) Received: from box.localdomain ([86.57.175.117]) by smtp.gmail.com with ESMTPSA id i11sm2644335ljg.9.2020.05.22.05.52.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 May 2020 05:52:21 -0700 (PDT) From: "Kirill A. Shutemov" X-Google-Original-From: "Kirill A. 
Shutemov" Received: by box.localdomain (Postfix, from userid 1000) id E0440102057; Fri, 22 May 2020 15:52:19 +0300 (+03) To: Dave Hansen , Andy Lutomirski , Peter Zijlstra , Paolo Bonzini , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel Cc: David Rientjes , Andrea Arcangeli , Kees Cook , Will Drewry , "Edgecombe, Rick P" , "Kleen, Andi" , x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [RFC 09/16] KVM: Protected memory extension Date: Fri, 22 May 2020 15:52:07 +0300 Message-Id: <20200522125214.31348-10-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add infrastructure that handles protected memory extension. Arch-specific code has to provide hypercalls and define non-zero VM_KVM_PROTECTED. Signed-off-by: Kirill A. Shutemov --- include/linux/kvm_host.h | 4 ++ mm/mprotect.c | 1 + virt/kvm/kvm_main.c | 131 +++++++++++++++++++++++++++++++++++++++ 3 files changed, 136 insertions(+) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index bd0bb600f610..d7072f6d6aa0 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -700,6 +700,10 @@ void kvm_arch_flush_shadow_all(struct kvm *kvm); void kvm_arch_flush_shadow_memslot(struct kvm *kvm, struct kvm_memory_slot *slot); +int kvm_protect_all_memory(struct kvm *kvm); +int kvm_protect_memory(struct kvm *kvm, + unsigned long gfn, unsigned long npages, bool protect); + int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn, struct page **pages, int nr_pages); diff --git a/mm/mprotect.c b/mm/mprotect.c index 494192ca954b..552be3b4c80a 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -505,6 +505,7 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev, vm_unacct_memory(charged); return error; } +EXPORT_SYMBOL_GPL(mprotect_fixup); /* * pkey==-1 when doing a legacy mprotect() diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 530af95efdf3..07d45da5d2aa 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -155,6 +155,8 @@ static void kvm_uevent_notify_change(unsigned int type, struct kvm *kvm); static unsigned long long kvm_createvm_count; static unsigned long long kvm_active_vms; +static int protect_memory(unsigned long start, unsigned long end, bool protect); + __weak int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, unsigned long start, unsigned long end, bool blockable) { @@ -1309,6 +1311,14 @@ int __kvm_set_memory_region(struct kvm *kvm, if (r) goto out_bitmap; + if (mem->memory_size && kvm->mem_protected) { + r = protect_memory(new.userspace_addr, + new.userspace_addr + new.npages * PAGE_SIZE, + true); + if (r) + goto out_bitmap; + } + if (old.dirty_bitmap && !new.dirty_bitmap) kvm_destroy_dirty_bitmap(&old); return 0; @@ -2652,6 +2662,127 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn) } EXPORT_SYMBOL_GPL(kvm_vcpu_mark_page_dirty); +static int protect_memory(unsigned long start, unsigned long end, bool protect) +{ + struct mm_struct *mm = current->mm; + struct vm_area_struct *vma, *prev; + int ret; + + if (down_write_killable(&mm->mmap_sem)) + return -EINTR; + + ret = -ENOMEM; + vma = find_vma(current->mm, start); + if (!vma) + goto out; + + ret = -EINVAL; + if 
(vma->vm_start > start) + goto out; + + if (start > vma->vm_start) + prev = vma; + else + prev = vma->vm_prev; + + ret = 0; + while (true) { + unsigned long newflags, tmp; + + tmp = vma->vm_end; + if (tmp > end) + tmp = end; + + newflags = vma->vm_flags; + if (protect) + newflags |= VM_KVM_PROTECTED; + else + newflags &= ~VM_KVM_PROTECTED; + + /* The VMA has been handled as part of other memslot */ + if (newflags == vma->vm_flags) + goto next; + + ret = mprotect_fixup(vma, &prev, start, tmp, newflags); + if (ret) + goto out; + +next: + start = tmp; + if (start < prev->vm_end) + start = prev->vm_end; + + if (start >= end) + goto out; + + vma = prev->vm_next; + if (!vma || vma->vm_start != start) { + ret = -ENOMEM; + goto out; + } + } +out: + up_write(&mm->mmap_sem); + return ret; +} + +int kvm_protect_memory(struct kvm *kvm, + unsigned long gfn, unsigned long npages, bool protect) +{ + struct kvm_memory_slot *memslot; + unsigned long start, end; + gfn_t numpages; + + if (!VM_KVM_PROTECTED) + return -KVM_ENOSYS; + + if (!npages) + return 0; + + memslot = gfn_to_memslot(kvm, gfn); + /* Not backed by memory. It's okay. */ + if (!memslot) + return 0; + + start = gfn_to_hva_many(memslot, gfn, &numpages); + end = start + npages * PAGE_SIZE; + + /* XXX: Share range across memory slots? */ + if (WARN_ON(numpages < npages)) + return -EINVAL; + + return protect_memory(start, end, protect); +} +EXPORT_SYMBOL_GPL(kvm_protect_memory); + +int kvm_protect_all_memory(struct kvm *kvm) +{ + struct kvm_memslots *slots; + struct kvm_memory_slot *memslot; + unsigned long start, end; + int i, ret = 0;; + + if (!VM_KVM_PROTECTED) + return -KVM_ENOSYS; + + mutex_lock(&kvm->slots_lock); + kvm->mem_protected = true; + for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { + slots = __kvm_memslots(kvm, i); + kvm_for_each_memslot(memslot, slots) { + start = memslot->userspace_addr; + end = start + memslot->npages * PAGE_SIZE; + ret = protect_memory(start, end, true); + if (ret) + goto out; + } + } +out: + mutex_unlock(&kvm->slots_lock); + return ret; +} +EXPORT_SYMBOL_GPL(kvm_protect_all_memory); + void kvm_sigset_activate(struct kvm_vcpu *vcpu) { if (!vcpu->sigset_active) From patchwork Fri May 22 12:52:08 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A . 
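To make the gfn-to-hva translation in kvm_protect_memory() concrete, a worked example with made-up memslot values:

	/* Illustrative numbers only.  Memslot: base_gfn = 0x100, npages = 512,
	 * userspace_addr = 0x7f0000000000.  Guest request:
	 * KVM_HC_MEM_SHARE(gfn = 0x180, npages = 16).
	 *
	 *   start    = gfn_to_hva_many(memslot, 0x180, &numpages)
	 *            = 0x7f0000000000 + (0x180 - 0x100) * 4096 = 0x7f0000080000
	 *   end      = start + 16 * 4096                       = 0x7f0000090000
	 *   numpages = 512 - (0x180 - 0x100) = 384   (frames left in the slot,
	 *                                              so the WARN_ON() stays quiet)
	 *
	 * protect_memory(start, end, false) then walks every VMA overlapping
	 * [start, end) and clears VM_KVM_PROTECTED through mprotect_fixup(),
	 * which splits VMAs at the range boundaries as needed. */
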
Shutemov" X-Patchwork-Id: 11565593 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 83B9B138A for ; Fri, 22 May 2020 12:53:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6D29720756 for ; Fri, 22 May 2020 12:53:19 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="w7+V51LF" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730038AbgEVMxM (ORCPT ); Fri, 22 May 2020 08:53:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38610 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729927AbgEVMw1 (ORCPT ); Fri, 22 May 2020 08:52:27 -0400 Received: from mail-lj1-x243.google.com (mail-lj1-x243.google.com [IPv6:2a00:1450:4864:20::243]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8345FC08C5D1 for ; Fri, 22 May 2020 05:52:26 -0700 (PDT) Received: by mail-lj1-x243.google.com with SMTP id c11so10326293ljn.2 for ; Fri, 22 May 2020 05:52:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov-name.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=hp9LVrV/jryYEOmKqMZilHNSbd08lh9WQOd5VsCx30A=; b=w7+V51LF61oIWXbD/uIzaRfC1I5yP8CeMVMcem9dRCqApl+zLMn39ALMj56hHtJ9Qf Gyb+f7KQfxfRYMPPlqUwSJltzR+TxpbCxAwW2trP0D5VP1QRHIqZblwOCnhLP5TSw2ox tSDHd0F/H4fqN+mgyYgPjTGtZZCBTHLGg8/fSKOx0jH5hz8hgx3KHFfdloZtRSES0pIf 2aOwZG4Q3FRMLhqztET2rmONddbkJeBUOmSg+4wYmi51JmHJDjcjjy3TIZNuwPhxGj2y UD5n8gyfyOquHbuAKSgJrf3yvDpZ2xnmbdfHr9HKoYZrbgZBrLkqru1euBjF0BiEeAhZ lS/g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=hp9LVrV/jryYEOmKqMZilHNSbd08lh9WQOd5VsCx30A=; b=Solel56hbmVXv0tY/QYInVx6X83H6BENnY9QrSaKLPWIfcBY+GpdVcFhQLNMT+b/WT aashAT1IrKKBotnzqFub9otpOL3i87w6SWinWLOKxQo6oSbReV4h4IMey/wtPg4+H2q1 BeCGO+yLQcZWcsTKwARJU6/SeHlgpW76cT9+uryVv3S1BtVz0FlA0ceu9SqvaYFNvMbo NmmpfOvTp08YsipbPpUNFqS7HCplZki4rztB7aIUuXDqF8LRMNM26rSThqlIvs68Bbx7 GH0qluJEshmZbfLGNcG3MSG+csGWp8S+pqDQrZWaCRhY7K9OQTQuLYxkhoyStDP3aK6G k70w== X-Gm-Message-State: AOAM532QxazHtMN+eEn2e23WMbsURnyRG4+i4/yRKY2isNhHc1aCQZnp a+/2nUQVRZrb2eYYVPWxcBa3Ew== X-Google-Smtp-Source: ABdhPJxjv+o2Z6h/SDQekvHRlmANly8ZZinNLfWWZwgF4Ua2uGZvW18C8lVOUb91lDab94je8Nxavg== X-Received: by 2002:a05:651c:547:: with SMTP id q7mr5071399ljp.437.1590151944946; Fri, 22 May 2020 05:52:24 -0700 (PDT) Received: from box.localdomain ([86.57.175.117]) by smtp.gmail.com with ESMTPSA id w15sm1266864ljj.57.2020.05.22.05.52.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 May 2020 05:52:22 -0700 (PDT) From: "Kirill A. Shutemov" X-Google-Original-From: "Kirill A. 
Shutemov" Received: by box.localdomain (Postfix, from userid 1000) id E7E01102058; Fri, 22 May 2020 15:52:19 +0300 (+03) To: Dave Hansen , Andy Lutomirski , Peter Zijlstra , Paolo Bonzini , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel Cc: David Rientjes , Andrea Arcangeli , Kees Cook , Will Drewry , "Edgecombe, Rick P" , "Kleen, Andi" , x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [RFC 10/16] KVM: x86: Enabled protected memory extension Date: Fri, 22 May 2020 15:52:08 +0300 Message-Id: <20200522125214.31348-11-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Wire up hypercalls for the feature and define VM_KVM_PROTECTED. Signed-off-by: Kirill A. Shutemov --- arch/x86/Kconfig | 1 + arch/x86/kvm/cpuid.c | 3 +++ arch/x86/kvm/x86.c | 9 +++++++++ include/linux/mm.h | 4 ++++ 4 files changed, 17 insertions(+) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 58dd44a1b92f..420e3947f0c6 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -801,6 +801,7 @@ config KVM_GUEST select ARCH_CPUIDLE_HALTPOLL select X86_MEM_ENCRYPT_COMMON select SWIOTLB + select ARCH_USES_HIGH_VMA_FLAGS default y ---help--- This option enables various optimizations for running under the KVM diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index 901cd1fdecd9..94cc5e45467e 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -714,6 +714,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function) (1 << KVM_FEATURE_POLL_CONTROL) | (1 << KVM_FEATURE_PV_SCHED_YIELD); + if (VM_KVM_PROTECTED) + entry->eax |=(1 << KVM_FEATURE_MEM_PROTECTED); + if (sched_info_on()) entry->eax |= (1 << KVM_FEATURE_STEAL_TIME); diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index c17e6eb9ad43..acba0ac07f61 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -7598,6 +7598,15 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu) kvm_sched_yield(vcpu->kvm, a0); ret = 0; break; + case KVM_HC_ENABLE_MEM_PROTECTED: + ret = kvm_protect_all_memory(vcpu->kvm); + break; + case KVM_HC_MEM_SHARE: + ret = kvm_protect_memory(vcpu->kvm, a0, a1, false); + break; + case KVM_HC_MEM_UNSHARE: + ret = kvm_protect_memory(vcpu->kvm, a0, a1, true); + break; default: ret = -KVM_ENOSYS; break; diff --git a/include/linux/mm.h b/include/linux/mm.h index 4f7195365cc0..6eb771c14968 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -329,7 +329,11 @@ extern unsigned int kobjsize(const void *objp); # define VM_MAPPED_COPY VM_ARCH_1 /* T if mapped copy of data (nommu mmap) */ #endif +#if defined(CONFIG_X86_64) && defined(CONFIG_KVM) +#define VM_KVM_PROTECTED VM_HIGH_ARCH_4 +#else #define VM_KVM_PROTECTED 0 +#endif #ifndef VM_GROWSUP # define VM_GROWSUP VM_NONE From patchwork Fri May 22 12:52:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A . 
Shutemov" X-Patchwork-Id: 11565599 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A8399138A for ; Fri, 22 May 2020 12:53:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8BDE520759 for ; Fri, 22 May 2020 12:53:34 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="lp2cPjep" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730153AbgEVMxd (ORCPT ); Fri, 22 May 2020 08:53:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38580 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729908AbgEVMw0 (ORCPT ); Fri, 22 May 2020 08:52:26 -0400 Received: from mail-lf1-x142.google.com (mail-lf1-x142.google.com [IPv6:2a00:1450:4864:20::142]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9E41FC05BD43 for ; Fri, 22 May 2020 05:52:25 -0700 (PDT) Received: by mail-lf1-x142.google.com with SMTP id c21so6458698lfb.3 for ; Fri, 22 May 2020 05:52:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov-name.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=xSlGXwZufpWJ/6PcI9wa5h49Q8Cff6k8tuJGZLYF0wg=; b=lp2cPjep8pO3ef7efxdX23yH12G8mY0S4LTqFaOt/6xv78Aem/EuLlfjxwRSjMbbX6 c/dhpm9jzQrXTaRqmuhQnVhYJ57/t1GipR/F3oPpH3D9CpdHSxRqWtAWWfxqBHwcX35q N/mtjNJ0ytEFesRnXoV+7dcOCh8RsSxoqPlYhBDXhNelnTH6s4nT8zWyM/ICFQNSBqOB wpqX8Yb6D2a4QNiHpDYDllg292XKYpO5BULq04WYMN60SYONE1D1z/40f5Ghu0nEf2eD cY62ID2aNCkj5MbDfWn54C9PukEjzMouBZ+xToc5f1RZETPmdrUqEoHRwkg9Cd6Z6A7V SOKg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=xSlGXwZufpWJ/6PcI9wa5h49Q8Cff6k8tuJGZLYF0wg=; b=F0aV6krdrFlqXTXjCW7YhSZYw8m6BK3HC2MnNLa3F22wjtGdXFPpobzfFpZ+efbp0+ iRFeQpJxhQwAuPNq6e47CIg7NHGEckzU9io1m/dRlfacSeWF5fpbzBiEVoFmm2OdTQIp 98AZLsvYMaPY8sCikURrE6yjOTEJpibuJgOP1bIK9tS6mwB9QMmIiSRfUhIGQ1R+xmwJ x6oc0YLMYzXzRsW5fo8aSKxTPcdtgeJxJ6a7kYNZfY6t7n3tNYzBfuopiYrdZ58MUdhe xlu+pK+QTnD7bHSuDJHPKl7lcfrIAYbxgYGbPnvGwBKBMqK+yjmUHDDmXMNFrQn7h4PE uq/Q== X-Gm-Message-State: AOAM5322AgxY0UbgrtK0DCfmsQEHFlCk9b2y1w4KCJsUvNCwoNwSg+/S qylpYoz3T9ASCAvIX09d5wXzhw== X-Google-Smtp-Source: ABdhPJz0KS216ZdFcrTTxidcO5MGn8K/8zWzWL1hNqwpMUy/NI3epk39K070+VkyxSdfYpnbmRHWxQ== X-Received: by 2002:a05:6512:14c:: with SMTP id m12mr7492997lfo.165.1590151944083; Fri, 22 May 2020 05:52:24 -0700 (PDT) Received: from box.localdomain ([86.57.175.117]) by smtp.gmail.com with ESMTPSA id h8sm1020840ljg.28.2020.05.22.05.52.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 May 2020 05:52:22 -0700 (PDT) From: "Kirill A. Shutemov" X-Google-Original-From: "Kirill A. 
Shutemov" Received: by box.localdomain (Postfix, from userid 1000) id EF8CA102059; Fri, 22 May 2020 15:52:19 +0300 (+03) To: Dave Hansen , Andy Lutomirski , Peter Zijlstra , Paolo Bonzini , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel Cc: David Rientjes , Andrea Arcangeli , Kees Cook , Will Drewry , "Edgecombe, Rick P" , "Kleen, Andi" , x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [RFC 11/16] KVM: Rework copy_to/from_guest() to avoid direct mapping Date: Fri, 22 May 2020 15:52:09 +0300 Message-Id: <20200522125214.31348-12-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org We are going unmap guest pages from direct mapping and cannot rely on it for guest memory access. Use temporary kmap_atomic()-style mapping to access guest memory. Signed-off-by: Kirill A. Shutemov --- virt/kvm/kvm_main.c | 57 +++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 55 insertions(+), 2 deletions(-) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 07d45da5d2aa..63282def3760 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2258,17 +2258,45 @@ static int next_segment(unsigned long len, int offset) return len; } +static pte_t **guest_map_ptes; +static struct vm_struct *guest_map_area; + +static void *map_page_atomic(struct page *page) +{ + pte_t *pte; + void *vaddr; + + preempt_disable(); + pte = guest_map_ptes[smp_processor_id()]; + vaddr = guest_map_area->addr + smp_processor_id() * PAGE_SIZE; + set_pte(pte, mk_pte(page, PAGE_KERNEL)); + return vaddr; +} + +static void unmap_page_atomic(void *vaddr) +{ + pte_t *pte = guest_map_ptes[smp_processor_id()]; + set_pte(pte, __pte(0)); + __flush_tlb_one_kernel((unsigned long)vaddr); + preempt_enable(); +} + int copy_from_guest(void *data, unsigned long hva, int len) { int offset = offset_in_page(hva); struct page *page; int npages, seg; + void *vaddr; while ((seg = next_segment(len, offset)) != 0) { npages = get_user_pages_unlocked(hva, 1, &page, FOLL_KVM); if (npages != 1) return -EFAULT; - memcpy(data, page_address(page) + offset, seg); + + vaddr = map_page_atomic(page); + memcpy(data, vaddr + offset, seg); + unmap_page_atomic(vaddr); + put_page(page); len -= seg; hva += seg; @@ -2283,13 +2311,18 @@ int copy_to_guest(unsigned long hva, const void *data, int len) int offset = offset_in_page(hva); struct page *page; int npages, seg; + void *vaddr; while ((seg = next_segment(len, offset)) != 0) { npages = get_user_pages_unlocked(hva, 1, &page, FOLL_WRITE | FOLL_KVM); if (npages != 1) return -EFAULT; - memcpy(page_address(page) + offset, data, seg); + + vaddr = map_page_atomic(page); + memcpy(vaddr + offset, data, seg); + unmap_page_atomic(vaddr); + put_page(page); len -= seg; hva += seg; @@ -4921,6 +4954,18 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align, if (r) goto out_free; + if (VM_KVM_PROTECTED) { + guest_map_ptes = kmalloc_array(num_possible_cpus(), + sizeof(pte_t *), GFP_KERNEL); + if (!guest_map_ptes) + goto out_unreg; + + guest_map_area = alloc_vm_area(PAGE_SIZE * num_possible_cpus(), + guest_map_ptes); + if (!guest_map_ptes) + goto out_unreg; + } + kvm_chardev_ops.owner = module; kvm_vm_fops.owner = module; kvm_vcpu_fops.owner = module; 
@@ -4944,6 +4989,10 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align, return 0; out_unreg: + if (guest_map_area) + free_vm_area(guest_map_area); + if (guest_map_ptes) + kfree(guest_map_ptes); kvm_async_pf_deinit(); out_free: kmem_cache_destroy(kvm_vcpu_cache); @@ -4965,6 +5014,10 @@ EXPORT_SYMBOL_GPL(kvm_init); void kvm_exit(void) { + if (guest_map_area) + free_vm_area(guest_map_area); + if (guest_map_ptes) + kfree(guest_map_ptes); debugfs_remove_recursive(kvm_debugfs_dir); misc_deregister(&kvm_dev); kmem_cache_destroy(kvm_vcpu_cache); From patchwork Fri May 22 12:52:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A . Shutemov" X-Patchwork-Id: 11565585 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5064890 for ; Fri, 22 May 2020 12:52:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3A90A206C3 for ; Fri, 22 May 2020 12:52:59 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="i0Xow3WA" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730119AbgEVMw5 (ORCPT ); Fri, 22 May 2020 08:52:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38566 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729946AbgEVMw1 (ORCPT ); Fri, 22 May 2020 08:52:27 -0400 Received: from mail-lf1-x141.google.com (mail-lf1-x141.google.com [IPv6:2a00:1450:4864:20::141]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B0E6EC02A198 for ; Fri, 22 May 2020 05:52:26 -0700 (PDT) Received: by mail-lf1-x141.google.com with SMTP id c12so6437754lfc.10 for ; Fri, 22 May 2020 05:52:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov-name.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=+GYG0qSSXXWNbW6SjclZipkt2clARhTBGL72G1TzR+g=; b=i0Xow3WAyL0kpKfuXIdY3nsoVq7W+EV6oRNEhiS9/GqxAvqsfGsFEey4lDq1sbsAgR c2wOtAadryEu9FVsEyb1ufe/OZRu6HsI0zAsPfnJX9Cza8D1+FE0u0l9tdERa4Ih4Qn7 fRUKi+BP4n4l/2k0Hne8Q33RiCfF5XEv1W2P9dxFOKQTZJRUJ0/vO6IRlqN1zBseWmfP 8B17qjaBI3U81aZuJDuNQwNREoT4KhZYstsVtwwjfCtaska2sJP88wjLooXeBtpWQPjW 0N9lRb6YrjPKdWl3ZP52L/8qV5vUTrMVgo4oObShrgOoEz+q8mEDmzi+0iTqjyjvzrfw 41/g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=+GYG0qSSXXWNbW6SjclZipkt2clARhTBGL72G1TzR+g=; b=hDsmNsSrakdBhvm31mwNAhyHImOLvZScNecb577owB6e0Au39NOjZfi7DlqM6a4XjA aETPbGDDHBzSKouhx4a8fNFQKoz75k1FMMPtVptVS+iIRWJCO0/s6B3/OrvEZrCopY9M bjuXyT+F09aNkam6WaRtvvNytiL4FFx42aVSVnefh1lJqxueh/in61T+lgjv0jmecuZb MTd/AM9I7lcDIyd8ABw9H/kdf3wLH0TjZgl/jpnScnYu/UFfNSrY+TstIXTk/4xvRWKY 8Alk+K9h3HmAmlenp6USfV7t5wyTGOExxVXQ60v1bGpfqMvya+kSqvOLOE4yYSHVuhbD wOgA== X-Gm-Message-State: AOAM530TpaW/pXXtytEyhJs57IWinDEET+OAQkSCvwXUjCxcoh3FoXde 7U2PabAg51Gh5V5Y/o+XRthMmw== X-Google-Smtp-Source: ABdhPJycCB0b0poeoDUYOr547M1WglFr8iGoptYLIwiwPw+BjbaimdNLYbqeeMIrOC3lCpI+0PYUJQ== X-Received: by 2002:a19:6b14:: with SMTP id d20mr1776578lfa.202.1590151945216; Fri, 22 May 2020 05:52:25 -0700 (PDT) Received: from 
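The reason for a hand-rolled window instead of kmap_atomic() is that on x86-64 kmap_atomic() simply returns page_address(), i.e. the direct-map address, and the direct mapping is exactly what the final patch removes for protected pages. kvm_init() therefore reserves one page of virtual address space per possible CPU together with a pointer to its backing PTE; the copy helpers then follow the familiar kmap_atomic() pairing (a sketch mirroring the code above):

	void *vaddr = map_page_atomic(page);	/* set_pte() into this CPU's slot, preemption off */
	memcpy(data, vaddr + offset, seg);
	unmap_page_atomic(vaddr);		/* clear the PTE, flush the local TLB, preemption on */
	put_page(page);
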
box.localdomain ([86.57.175.117]) by smtp.gmail.com with ESMTPSA id v28sm2405723lfd.35.2020.05.22.05.52.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 May 2020 05:52:22 -0700 (PDT) From: "Kirill A. Shutemov" X-Google-Original-From: "Kirill A. Shutemov" Received: by box.localdomain (Postfix, from userid 1000) id 02FC710205A; Fri, 22 May 2020 15:52:20 +0300 (+03) To: Dave Hansen , Andy Lutomirski , Peter Zijlstra , Paolo Bonzini , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel Cc: David Rientjes , Andrea Arcangeli , Kees Cook , Will Drewry , "Edgecombe, Rick P" , "Kleen, Andi" , x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [RFC 12/16] x86/kvm: Share steal time page with host Date: Fri, 22 May 2020 15:52:10 +0300 Message-Id: <20200522125214.31348-13-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org struct kvm_steal_time is shared between guest and host. Mark it as shared. Signed-off-by: Kirill A. Shutemov --- arch/x86/kernel/kvm.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c index f50d65df4412..b0f445796ed1 100644 --- a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -286,11 +286,15 @@ static void kvm_register_steal_time(void) { int cpu = smp_processor_id(); struct kvm_steal_time *st = &per_cpu(steal_time, cpu); + unsigned long phys; if (!has_steal_clock) return; - wrmsrl(MSR_KVM_STEAL_TIME, (slow_virt_to_phys(st) | KVM_MSR_ENABLED)); + phys = slow_virt_to_phys(st); + if (kvm_mem_protected()) + kvm_hypercall2(KVM_HC_MEM_SHARE, phys >> PAGE_SHIFT, 1); + wrmsrl(MSR_KVM_STEAL_TIME, (phys | KVM_MSR_ENABLED)); pr_info("kvm-stealtime: cpu %d, msr %llx\n", cpu, (unsigned long long) slow_virt_to_phys(st)); } From patchwork Fri May 22 12:52:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A . 
Shutemov" X-Patchwork-Id: 11565597 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9626F90 for ; Fri, 22 May 2020 12:53:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 813752072C for ; Fri, 22 May 2020 12:53:22 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="RPoE0F31" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730195AbgEVMxV (ORCPT ); Fri, 22 May 2020 08:53:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38602 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729922AbgEVMw0 (ORCPT ); Fri, 22 May 2020 08:52:26 -0400 Received: from mail-lj1-x242.google.com (mail-lj1-x242.google.com [IPv6:2a00:1450:4864:20::242]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CE3A6C08C5C1 for ; Fri, 22 May 2020 05:52:25 -0700 (PDT) Received: by mail-lj1-x242.google.com with SMTP id c11so10326248ljn.2 for ; Fri, 22 May 2020 05:52:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov-name.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=pE6faVejbkAIUEORvfpEBGX8dWE5OEtccf8+g3bpDfA=; b=RPoE0F31yzBfOvWe7SLr/WKR/C2Sx4Gh7bwAeclpxq0/Vb+GNjxAJuEDdywAhAV+rG LPSwpxjdhaPrO6tPK9kLyU9lN+J2ICMlxxas0V3l5kGBMo3GGtqma7ZKf2e7FtyzTZqS KG5R8274XsAkdGpxCT1EQsqp3V4eoAOQ4oqvJ4wtWaqB9Ir9xHzwM7bFrSpATnNTj6er 1QQiE6gfmsKimMLDlHETHNkpXH6oKGpbXDEkowiisx49AcyncgVvc25A2cOV7mVWKXHV 79ghSiISaygzuziCXm5Lg5/wF/eQqqmW0HwJzq/V+XbVuTqhqr1DrqhesngodJsRG9EX SMCg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=pE6faVejbkAIUEORvfpEBGX8dWE5OEtccf8+g3bpDfA=; b=Ap7dnJHJCVusquAUguMotGrjrAMKFgEE/OAgpPL6hn534+Oixv2K6lUlxTMGPkJsSp r576uy5cADE/7pDDl4/J8Ei0nfSjEK2ydktATI/gR71YjAnkTcT2RPK701dUGP+JFSG7 Wio3PPV4PxUB3X6JmSa9/yG9O+UyILd0x6SFu6Q/i2d0QG4mPK7BfiGppJXCtD5aWNVa i8OGStlj4LJOiEn9b7rfM7d6Gmj/66DT/IgJw0q5x04peHfgeo8iCJVp+NLX6kno66Py ssR4XpZWWrtNdJl78sR1XH0CYJgRLJEXB1L5nhSPPReR89pCZAzI1GmebbM4fulZx+77 U5mg== X-Gm-Message-State: AOAM533cP9TIle+VeSbSluplAqoonPvPeifReS9hX6+d9ksAZ1Cf1cAh rvv/kTMc26rw7Shu4TCFwnaEYA== X-Google-Smtp-Source: ABdhPJxgFPaCjAkcKRCcIN9VQB3EgCJQ/VVHg9h7HUrXezHSNTMlbVQq98oVAeqSL+RKdW5J2/HvFQ== X-Received: by 2002:a05:651c:2ce:: with SMTP id f14mr7217431ljo.87.1590151944358; Fri, 22 May 2020 05:52:24 -0700 (PDT) Received: from box.localdomain ([86.57.175.117]) by smtp.gmail.com with ESMTPSA id t22sm2303766ljk.11.2020.05.22.05.52.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 May 2020 05:52:22 -0700 (PDT) From: "Kirill A. Shutemov" X-Google-Original-From: "Kirill A. 
Shutemov" Received: by box.localdomain (Postfix, from userid 1000) id 0ABFA10205B; Fri, 22 May 2020 15:52:20 +0300 (+03) To: Dave Hansen , Andy Lutomirski , Peter Zijlstra , Paolo Bonzini , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel Cc: David Rientjes , Andrea Arcangeli , Kees Cook , Will Drewry , "Edgecombe, Rick P" , "Kleen, Andi" , x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [RFC 13/16] x86/kvmclock: Share hvclock memory with the host Date: Fri, 22 May 2020 15:52:11 +0300 Message-Id: <20200522125214.31348-14-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org hvclock is shared between the guest and the hypervisor. It has to be accessible by host. Signed-off-by: Kirill A. Shutemov --- arch/x86/kernel/kvmclock.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c index 34b18f6eeb2c..ac6c2abe0d0f 100644 --- a/arch/x86/kernel/kvmclock.c +++ b/arch/x86/kernel/kvmclock.c @@ -253,7 +253,7 @@ static void __init kvmclock_init_mem(void) * hvclock is shared between the guest and the hypervisor, must * be mapped decrypted. */ - if (sev_active()) { + if (sev_active() || kvm_mem_protected()) { r = set_memory_decrypted((unsigned long) hvclock_mem, 1UL << order); if (r) { From patchwork Fri May 22 12:52:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A . 
Shutemov" X-Patchwork-Id: 11565573 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A3645159A for ; Fri, 22 May 2020 12:52:44 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 88FB120756 for ; Fri, 22 May 2020 12:52:44 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="nAvZfwpo" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730019AbgEVMwi (ORCPT ); Fri, 22 May 2020 08:52:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38618 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729955AbgEVMw2 (ORCPT ); Fri, 22 May 2020 08:52:28 -0400 Received: from mail-lj1-x244.google.com (mail-lj1-x244.google.com [IPv6:2a00:1450:4864:20::244]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BDA39C02A19D for ; Fri, 22 May 2020 05:52:27 -0700 (PDT) Received: by mail-lj1-x244.google.com with SMTP id m12so10088399ljc.6 for ; Fri, 22 May 2020 05:52:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov-name.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=iuw1xkRjcXmsK4stEeuiGW4w7Bgw3ocR3EwBbivjZB8=; b=nAvZfwpo2XufBEG9O4I+bP/QA3rUDiglZWAfRAUbbmxFJHVM7MjJX6jw9D+v83wIEh bLAe30hWjHZbCfPlen5i0Pk/uCZMnaHGf+QEwyDoeKzBvpuREkRedR6H+OFtWMTQ9B1A V0TJrPi4cZ8U+PtvIXGZ0+HBjbkEp5kHOmiNTiI4Z4ed72WscM9psLvvGE7W4WJZy1t9 AAGCxTd5v19W2bwiyUOA3qjxhuuygCbECxupUCzA4tTRbc0DEx33sLJ1UrDrpBoXbCpa 7wl1kPLvtGIQPMjEhtvLJn57b7mKbvxjyERMnAY8TmQ6cKzezAJWQSNWeN0WzBFY62q8 zoFQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=iuw1xkRjcXmsK4stEeuiGW4w7Bgw3ocR3EwBbivjZB8=; b=sASCnf1HvflHnmfAtesncDAf7jkkIr1EuW+oRR3bTy6K+KUT7vEt7c1IySFeJjJhf/ fQGNM/VnEqd+9uHv3r7F3Z7DfxlppXn2+UDQpPJuLnwffntDVYo9t1z6LxlimlMiw2gX JKo3SQZHBGeb2L1O0Wh1MkBu9QrQaBQjwD9u5FRGwIynAA5etJHO873yCkQv5N6wJswD s9n7mrERQ6NZL1hI7bRa4YJAbHdElg7HtaDsE22VMGiKkbfdUDJIy5m8TfTAlJNEqoCr CU60ZKJhBbEW5a6H6H7K+zFdbfu1CfvimMXOAGqtZv0pCtq+KXTOoxESwhN6LHdy0xxm +P9g== X-Gm-Message-State: AOAM532hFED/k4uXPVmwskuAlr3eRDhhY2R6CVMNh68JZ2lPdmTNOOme K5joYkkO8P3hg7a8mcsfJdh49Q== X-Google-Smtp-Source: ABdhPJwqihqvAsOaLlmGwWVOsCLhtqXYamXtJyWkA59wywoSZ5SCKVLtje40pqAx/jhBiNJOmWuLpg== X-Received: by 2002:a2e:7a02:: with SMTP id v2mr5172371ljc.374.1590151946210; Fri, 22 May 2020 05:52:26 -0700 (PDT) Received: from box.localdomain ([86.57.175.117]) by smtp.gmail.com with ESMTPSA id j10sm2312515ljc.21.2020.05.22.05.52.21 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 May 2020 05:52:22 -0700 (PDT) From: "Kirill A. Shutemov" X-Google-Original-From: "Kirill A. 
Shutemov" Received: by box.localdomain (Postfix, from userid 1000) id 1206D10205C; Fri, 22 May 2020 15:52:20 +0300 (+03) To: Dave Hansen , Andy Lutomirski , Peter Zijlstra , Paolo Bonzini , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel Cc: David Rientjes , Andrea Arcangeli , Kees Cook , Will Drewry , "Edgecombe, Rick P" , "Kleen, Andi" , x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [RFC 14/16] KVM: Introduce gfn_to_pfn_memslot_protected() Date: Fri, 22 May 2020 15:52:12 +0300 Message-Id: <20200522125214.31348-15-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The new interface allows to detect if the page is protected. A protected page cannot be accessed directly by the host: it has to be mapped manually. This is preparation for the next patch. Signed-off-by: Kirill A. Shutemov --- arch/powerpc/kvm/book3s_64_mmu_hv.c | 2 +- arch/powerpc/kvm/book3s_64_mmu_radix.c | 2 +- arch/x86/kvm/mmu/mmu.c | 6 +++-- include/linux/kvm_host.h | 2 +- virt/kvm/kvm_main.c | 35 ++++++++++++++++++-------- 5 files changed, 32 insertions(+), 15 deletions(-) diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c index 2b35f9bcf892..e9a13ecf812f 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c @@ -587,7 +587,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu, } else { /* Call KVM generic code to do the slow-path check */ pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL, - writing, &write_ok); + writing, &write_ok, NULL); if (is_error_noslot_pfn(pfn)) return -EFAULT; page = NULL; diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c index aa12cd4078b3..58f8df466a94 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c @@ -798,7 +798,7 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu, /* Call KVM generic code to do the slow-path check */ pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL, - writing, upgrade_p); + writing, upgrade_p, NULL); if (is_error_noslot_pfn(pfn)) return -EFAULT; page = NULL; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 8071952e9cf2..0fc095a66a3c 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4096,7 +4096,8 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn, slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn); async = false; - *pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async, write, writable); + *pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async, write, writable, + NULL); if (!async) return false; /* *pfn has correct page already */ @@ -4110,7 +4111,8 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn, return true; } - *pfn = __gfn_to_pfn_memslot(slot, gfn, false, NULL, write, writable); + *pfn = __gfn_to_pfn_memslot(slot, gfn, false, NULL, write, writable, + NULL); return false; } diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index d7072f6d6aa0..eca18ef9b1f4 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -724,7 +724,7 @@ kvm_pfn_t gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn); 
kvm_pfn_t gfn_to_pfn_memslot_atomic(struct kvm_memory_slot *slot, gfn_t gfn); kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn, bool atomic, bool *async, bool write_fault, - bool *writable); + bool *writable, bool *protected); void kvm_release_pfn_clean(kvm_pfn_t pfn); void kvm_release_pfn_dirty(kvm_pfn_t pfn); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 63282def3760..8bcf3201304a 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1779,9 +1779,10 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault, * 1 indicates success, -errno is returned if error is detected. */ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault, - bool *writable, kvm_pfn_t *pfn) + bool *writable, bool *protected, kvm_pfn_t *pfn) { unsigned int flags = FOLL_HWPOISON | FOLL_KVM; + struct vm_area_struct *vma; struct page *page; int npages = 0; @@ -1795,9 +1796,15 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault, if (async) flags |= FOLL_NOWAIT; - npages = get_user_pages_unlocked(addr, 1, &page, flags); - if (npages != 1) + down_read(¤t->mm->mmap_sem); + npages = get_user_pages(addr, 1, flags, &page, &vma); + if (npages != 1) { + up_read(¤t->mm->mmap_sem); return npages; + } + if (protected) + *protected = vma_is_kvm_protected(vma); + up_read(¤t->mm->mmap_sem); /* map read fault as writable if possible */ if (unlikely(!write_fault) && writable) { @@ -1888,7 +1895,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma, * whether the mapping is writable. */ static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async, - bool write_fault, bool *writable) + bool write_fault, bool *writable, bool *protected) { struct vm_area_struct *vma; kvm_pfn_t pfn = 0; @@ -1903,7 +1910,8 @@ static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async, if (atomic) return KVM_PFN_ERR_FAULT; - npages = hva_to_pfn_slow(addr, async, write_fault, writable, &pfn); + npages = hva_to_pfn_slow(addr, async, write_fault, writable, protected, + &pfn); if (npages == 1) return pfn; @@ -1937,7 +1945,7 @@ static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async, kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn, bool atomic, bool *async, bool write_fault, - bool *writable) + bool *writable, bool *protected) { unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault); @@ -1960,7 +1968,7 @@ kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn, } return hva_to_pfn(addr, atomic, async, write_fault, - writable); + writable, protected); } EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot); @@ -1968,19 +1976,26 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, bool *writable) { return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, NULL, - write_fault, writable); + write_fault, writable, NULL); } EXPORT_SYMBOL_GPL(gfn_to_pfn_prot); kvm_pfn_t gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn) { - return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL); + return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL, NULL); } EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot); +static kvm_pfn_t gfn_to_pfn_memslot_protected(struct kvm_memory_slot *slot, + gfn_t gfn, bool *protected) +{ + return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL, + protected); +} + kvm_pfn_t gfn_to_pfn_memslot_atomic(struct kvm_memory_slot *slot, gfn_t gfn) { - return __gfn_to_pfn_memslot(slot, gfn, true, NULL, true, NULL); + return 
__gfn_to_pfn_memslot(slot, gfn, true, NULL, true, NULL, NULL); } EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic); From patchwork Fri May 22 12:52:13 2020 X-Patchwork-Submitter: "Kirill A . Shutemov" X-Patchwork-Id: 11565591 From: "Kirill A.
Shutemov" Received: by box.localdomain (Postfix, from userid 1000) id 19BE810205D; Fri, 22 May 2020 15:52:20 +0300 (+03) To: Dave Hansen , Andy Lutomirski , Peter Zijlstra , Paolo Bonzini , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel Cc: David Rientjes , Andrea Arcangeli , Kees Cook , Will Drewry , "Edgecombe, Rick P" , "Kleen, Andi" , x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [RFC 15/16] KVM: Handle protected memory in __kvm_map_gfn()/__kvm_unmap_gfn() Date: Fri, 22 May 2020 15:52:13 +0300 Message-Id: <20200522125214.31348-16-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org We cannot access protected pages directly. Use ioremap() to create a temporary mapping of the page. The mapping is destroyed on __kvm_unmap_gfn(). The new interface gfn_to_pfn_memslot_protected() is used to detect if the page is protected. ioremap_cache_force() is a hack to bypass IORES_MAP_SYSTEM_RAM check in the x86 ioremap code. We need a better solution. Signed-off-by: Kirill A. Shutemov --- arch/x86/include/asm/io.h | 2 ++ arch/x86/include/asm/pgtable_types.h | 1 + arch/x86/mm/ioremap.c | 16 +++++++++++++--- include/linux/kvm_host.h | 1 + virt/kvm/kvm_main.c | 14 +++++++++++--- 5 files changed, 28 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h index c58d52fd7bf2..a3e1bfad1026 100644 --- a/arch/x86/include/asm/io.h +++ b/arch/x86/include/asm/io.h @@ -184,6 +184,8 @@ extern void __iomem *ioremap_uc(resource_size_t offset, unsigned long size); #define ioremap_uc ioremap_uc extern void __iomem *ioremap_cache(resource_size_t offset, unsigned long size); #define ioremap_cache ioremap_cache +extern void __iomem *ioremap_cache_force(resource_size_t offset, unsigned long size); +#define ioremap_cache_force ioremap_cache_force extern void __iomem *ioremap_prot(resource_size_t offset, unsigned long size, unsigned long prot_val); #define ioremap_prot ioremap_prot extern void __iomem *ioremap_encrypted(resource_size_t phys_addr, unsigned long size); diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h index b6606fe6cfdf..66cc22abda7b 100644 --- a/arch/x86/include/asm/pgtable_types.h +++ b/arch/x86/include/asm/pgtable_types.h @@ -147,6 +147,7 @@ enum page_cache_mode { _PAGE_CACHE_MODE_UC = 3, _PAGE_CACHE_MODE_WT = 4, _PAGE_CACHE_MODE_WP = 5, + _PAGE_CACHE_MODE_WB_FORCE = 6, _PAGE_CACHE_MODE_NUM = 8 }; diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c index 18c637c0dc6f..e48fc0e130b2 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c @@ -202,9 +202,12 @@ __ioremap_caller(resource_size_t phys_addr, unsigned long size, __ioremap_check_mem(phys_addr, size, &io_desc); /* - * Don't allow anybody to remap normal RAM that we're using.. + * Don't allow anybody to remap normal RAM that we're using, unless + * _PAGE_CACHE_MODE_WB_FORCE is used. 
*/ - if (io_desc.flags & IORES_MAP_SYSTEM_RAM) { + if (pcm == _PAGE_CACHE_MODE_WB_FORCE) { + pcm = _PAGE_CACHE_MODE_WB; + } else if (io_desc.flags & IORES_MAP_SYSTEM_RAM) { WARN_ONCE(1, "ioremap on RAM at %pa - %pa\n", &phys_addr, &last_addr); return NULL; @@ -419,6 +422,13 @@ void __iomem *ioremap_cache(resource_size_t phys_addr, unsigned long size) } EXPORT_SYMBOL(ioremap_cache); +void __iomem *ioremap_cache_force(resource_size_t phys_addr, unsigned long size) +{ + return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WB_FORCE, + __builtin_return_address(0), false); +} +EXPORT_SYMBOL(ioremap_cache_force); + void __iomem *ioremap_prot(resource_size_t phys_addr, unsigned long size, unsigned long prot_val) { @@ -467,7 +477,7 @@ void iounmap(volatile void __iomem *addr) p = find_vm_area((void __force *)addr); if (!p) { - printk(KERN_ERR "iounmap: bad address %p\n", addr); + printk(KERN_ERR "iounmap: bad address %px\n", addr); dump_stack(); return; } diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index eca18ef9b1f4..b6944f88033d 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -237,6 +237,7 @@ struct kvm_host_map { void *hva; kvm_pfn_t pfn; kvm_pfn_t gfn; + bool protected; }; /* diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 8bcf3201304a..71aac117357f 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2091,6 +2091,7 @@ static int __kvm_map_gfn(struct kvm_memslots *slots, gfn_t gfn, void *hva = NULL; struct page *page = KVM_UNMAPPED_PAGE; struct kvm_memory_slot *slot = __gfn_to_memslot(slots, gfn); + bool protected = false; u64 gen = slots->generation; if (!map) @@ -2107,12 +2108,16 @@ static int __kvm_map_gfn(struct kvm_memslots *slots, gfn_t gfn, } else { if (atomic) return -EAGAIN; - pfn = gfn_to_pfn_memslot(slot, gfn); + pfn = gfn_to_pfn_memslot_protected(slot, gfn, &protected); } if (is_error_noslot_pfn(pfn)) return -EINVAL; - if (pfn_valid(pfn)) { + if (protected) { + if (atomic) + return -EAGAIN; + hva = ioremap_cache_force(pfn_to_hpa(pfn), PAGE_SIZE); + } else if (pfn_valid(pfn)) { page = pfn_to_page(pfn); if (atomic) hva = kmap_atomic(page); @@ -2133,6 +2138,7 @@ static int __kvm_map_gfn(struct kvm_memslots *slots, gfn_t gfn, map->hva = hva; map->pfn = pfn; map->gfn = gfn; + map->protected = protected; return 0; } @@ -2163,7 +2169,9 @@ static void __kvm_unmap_gfn(struct kvm_memory_slot *memslot, if (!map->hva) return; - if (map->page != KVM_UNMAPPED_PAGE) { + if (map->protected) { + iounmap(map->hva); + } else if (map->page != KVM_UNMAPPED_PAGE) { if (atomic) kunmap_atomic(map->hva); else From patchwork Fri May 22 12:52:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A . 
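Existing callers do not change: kvm_vcpu_map()/kvm_vcpu_unmap() keep their signatures and the protected case hides behind map->protected. A minimal caller-side sketch (illustrative) of the pattern this keeps working for protected guests:

	struct kvm_host_map map;

	if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
		return -EFAULT;

	/* For a protected page map.hva is an ioremap_cache_force() alias of the
	 * pfn rather than a direct-map or kmap address; the memcpy() below works
	 * either way. */
	memcpy(map.hva + offset_in_page(gpa), data, len);

	kvm_vcpu_unmap(vcpu, &map, true);	/* iounmap()s the alias and marks the gfn dirty */
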
Shutemov" X-Patchwork-Id: 11565595 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8F1A790 for ; Fri, 22 May 2020 12:53:21 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 737D82072C for ; Fri, 22 May 2020 12:53:21 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="qzMU1Yyk" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730159AbgEVMxU (ORCPT ); Fri, 22 May 2020 08:53:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38604 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729923AbgEVMw0 (ORCPT ); Fri, 22 May 2020 08:52:26 -0400 Received: from mail-lj1-x241.google.com (mail-lj1-x241.google.com [IPv6:2a00:1450:4864:20::241]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 335FAC08C5C4 for ; Fri, 22 May 2020 05:52:26 -0700 (PDT) Received: by mail-lj1-x241.google.com with SMTP id o14so12528700ljp.4 for ; Fri, 22 May 2020 05:52:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov-name.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=pRaycCbZ+3CnIYdCK6nzb5QeCq4+CXz15hhrfgpmlnU=; b=qzMU1YykXzSpxfq4gUGJDa7r6eExwnPKDaXlBK/O4gsy+HAeO9hWikW7PXlS15tSOH 7H2ujvQyQ05kpn3qDXjrynE8jghjk+tV+50UPlqbQIX2du+hpaNowisoN3ub3qLjZ5f3 99uEnzvP+pKXBJgAF83Y0N6c9SKtYKnnrIwDVaq4Kv2Wr4a1zhoeEi0d4NhcPOVBti0r vHJJz6mQyDZ0DqRM/9/4Yxr3+0jwg6iVABSBDnyxepF8R+wLTQn/tl3MyqWiC7CeSR0U kzl1lMGgQZQwNGnw8k9TRIqO4lZbttrvG76y6b1LYb/UbTJV1kAVw7lgseAYLJzdrIFo mGfA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=pRaycCbZ+3CnIYdCK6nzb5QeCq4+CXz15hhrfgpmlnU=; b=RVGrlZU0aGzDwyx+D98a5UqzIFXMVFPcgjszdCzTwgfCtcuUCgUtFSepzUa2llh8tT 2bWLHzerT2Xao8gmaOemgDziLl8wc5cQvkhSeg5KUyq6iFlD4Ih+etJh/Dkn4ef5S1de AZZ0IKMsSn2FpECX0izlDbLdbq/QdDY5CTqxTUzLmXY3YlrMyXg14/mW5lr/8e+BI8EJ 5j0hd1CGJwpF+FiKLY/1q1E8G5X/88JkMp1eDIann9vGqoRcRiSC7tkp/+cfD8wvMFLh Oirz5yDxpKOyCOnbhCQv4grGVZ3SV99tcbdbJcQTHAeDbesssIJmdhxxlKax1RB36EWt At2w== X-Gm-Message-State: AOAM5330C5nJj0EZQOMEZQoWfV7eflT305sQlZ1ep63La3X3q7GKiGCW W0xR2jmwNMMwi8b1jwPfTfXwZA== X-Google-Smtp-Source: ABdhPJw8ESx6i7S5LKvY5bQPxCQloW++OXZLrfBoGefxWzX7bSl6+c0AL/x/tGcrHssIDz/lu11VOg== X-Received: by 2002:a2e:5d1:: with SMTP id 200mr7029616ljf.157.1590151944653; Fri, 22 May 2020 05:52:24 -0700 (PDT) Received: from box.localdomain ([86.57.175.117]) by smtp.gmail.com with ESMTPSA id p23sm1665017ljh.117.2020.05.22.05.52.21 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 May 2020 05:52:22 -0700 (PDT) From: "Kirill A. Shutemov" X-Google-Original-From: "Kirill A. 
Shutemov" Received: by box.localdomain (Postfix, from userid 1000) id 21A53102061; Fri, 22 May 2020 15:52:20 +0300 (+03) To: Dave Hansen , Andy Lutomirski , Peter Zijlstra , Paolo Bonzini , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel Cc: David Rientjes , Andrea Arcangeli , Kees Cook , Will Drewry , "Edgecombe, Rick P" , "Kleen, Andi" , x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [RFC 16/16] KVM: Unmap protected pages from direct mapping Date: Fri, 22 May 2020 15:52:14 +0300 Message-Id: <20200522125214.31348-17-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org If the protected memory feature enabled, unmap guest memory from kernel's direct mappings. Migration and KSM is disabled for protected memory as it would require a special treatment. Signed-off-by: Kirill A. Shutemov --- arch/x86/mm/pat/set_memory.c | 1 + include/linux/kvm_host.h | 3 ++ mm/huge_memory.c | 9 +++++ mm/ksm.c | 3 ++ mm/memory.c | 13 +++++++ mm/rmap.c | 4 ++ virt/kvm/kvm_main.c | 74 ++++++++++++++++++++++++++++++++++++ 7 files changed, 107 insertions(+) diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index 6f075766bb94..13988413af40 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -2227,6 +2227,7 @@ void __kernel_map_pages(struct page *page, int numpages, int enable) arch_flush_lazy_mmu_mode(); } +EXPORT_SYMBOL_GPL(__kernel_map_pages); #ifdef CONFIG_HIBERNATION bool kernel_page_present(struct page *page) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index b6944f88033d..e1d7762b615c 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -705,6 +705,9 @@ int kvm_protect_all_memory(struct kvm *kvm); int kvm_protect_memory(struct kvm *kvm, unsigned long gfn, unsigned long npages, bool protect); +void kvm_map_page(struct page *page, int nr_pages); +void kvm_unmap_page(struct page *page, int nr_pages); + int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn, struct page **pages, int nr_pages); diff --git a/mm/huge_memory.c b/mm/huge_memory.c index c3562648a4ef..d8a444a401cc 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -33,6 +33,7 @@ #include #include #include +#include #include #include @@ -650,6 +651,10 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf, spin_unlock(vmf->ptl); count_vm_event(THP_FAULT_ALLOC); count_memcg_events(memcg, THP_FAULT_ALLOC, 1); + + /* Unmap page from direct mapping */ + if (vma_is_kvm_protected(vma)) + kvm_unmap_page(page, HPAGE_PMD_NR); } return 0; @@ -1886,6 +1891,10 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, page_remove_rmap(page, true); VM_BUG_ON_PAGE(page_mapcount(page) < 0, page); VM_BUG_ON_PAGE(!PageHead(page), page); + + /* Map the page back to the direct mapping */ + if (vma_is_kvm_protected(vma)) + kvm_map_page(page, HPAGE_PMD_NR); } else if (thp_migration_supported()) { swp_entry_t entry; diff --git a/mm/ksm.c b/mm/ksm.c index 281c00129a2e..942b88782ac2 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -527,6 +527,9 @@ static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm, return NULL; if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma) return 
NULL; + /* TODO */ + if (vma_is_kvm_protected(vma)) + return NULL; return vma; } diff --git a/mm/memory.c b/mm/memory.c index d7228db6e4bf..74773229b854 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -71,6 +71,7 @@ #include #include #include +#include #include @@ -1088,6 +1089,11 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb, likely(!(vma->vm_flags & VM_SEQ_READ))) mark_page_accessed(page); } + + /* Map the page back to the direct mapping */ + if (vma_is_anonymous(vma) && vma_is_kvm_protected(vma)) + kvm_map_page(page, 1); + rss[mm_counter(page)]--; page_remove_rmap(page, false); if (unlikely(page_mapcount(page) < 0)) @@ -3312,6 +3318,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) struct page *page; vm_fault_t ret = 0; pte_t entry; + bool set = false; /* File mapping without ->vm_ops ? */ if (vma->vm_flags & VM_SHARED) @@ -3397,6 +3404,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) page_add_new_anon_rmap(page, vma, vmf->address, false); mem_cgroup_commit_charge(page, memcg, false, false); lru_cache_add_active_or_unevictable(page, vma); + set = true; setpte: set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry); @@ -3404,6 +3412,11 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) update_mmu_cache(vma, vmf->address, vmf->pte); unlock: pte_unmap_unlock(vmf->pte, vmf->ptl); + + /* Unmap page from direct mapping */ + if (vma_is_kvm_protected(vma) && set) + kvm_unmap_page(page, 1); + return ret; release: mem_cgroup_cancel_charge(page, memcg, false); diff --git a/mm/rmap.c b/mm/rmap.c index f79a206b271a..a9b2e347d1ab 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1709,6 +1709,10 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, static bool invalid_migration_vma(struct vm_area_struct *vma, void *arg) { + /* TODO */ + if (vma_is_kvm_protected(vma)) + return true; + return vma_is_temporary_stack(vma); } diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 71aac117357f..defc33d3a124 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -51,6 +51,7 @@ #include #include #include +#include #include #include @@ -2718,6 +2719,72 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn) } EXPORT_SYMBOL_GPL(kvm_vcpu_mark_page_dirty); +void kvm_map_page(struct page *page, int nr_pages) +{ + int i; + + /* Clear page before returning it to the direct mapping */ + for (i = 0; i < nr_pages; i++) { + void *p = map_page_atomic(page + i); + memset(p, 0, PAGE_SIZE); + unmap_page_atomic(p); + } + + kernel_map_pages(page, nr_pages, 1); +} +EXPORT_SYMBOL_GPL(kvm_map_page); + +void kvm_unmap_page(struct page *page, int nr_pages) +{ + kernel_map_pages(page, nr_pages, 0); +} +EXPORT_SYMBOL_GPL(kvm_unmap_page); + +static int adjust_direct_mapping_pte_range(pmd_t *pmd, unsigned long addr, + unsigned long end, + struct mm_walk *walk) +{ + bool protect = (bool)walk->private; + pte_t *pte; + struct page *page; + + if (pmd_trans_huge(*pmd)) { + page = pmd_page(*pmd); + if (is_huge_zero_page(page)) + return 0; + VM_BUG_ON_PAGE(total_mapcount(page) != 1, page); + /* XXX: Would it fail with direct device assignment? 
*/ + VM_BUG_ON_PAGE(page_count(page) != 1, page); + kernel_map_pages(page, HPAGE_PMD_NR, !protect); + return 0; + } + + pte = pte_offset_map(pmd, addr); + for (; addr != end; pte++, addr += PAGE_SIZE) { + pte_t entry = *pte; + + if (!pte_present(entry)) + continue; + + if (is_zero_pfn(pte_pfn(entry))) + continue; + + page = pte_page(entry); + + VM_BUG_ON_PAGE(page_mapcount(page) != 1, page); + /* XXX: Would it fail with direct device assignment? */ + VM_BUG_ON_PAGE(page_count(page) != + total_mapcount(compound_head(page)), page); + kernel_map_pages(page, 1, !protect); + } + + return 0; +} + +static const struct mm_walk_ops adjust_direct_mapping_ops = { + .pmd_entry = adjust_direct_mapping_pte_range, +}; + static int protect_memory(unsigned long start, unsigned long end, bool protect) { struct mm_struct *mm = current->mm; @@ -2763,6 +2830,13 @@ static int protect_memory(unsigned long start, unsigned long end, bool protect) if (ret) goto out; + if (vma_is_anonymous(vma)) { + ret = walk_page_range_novma(mm, start, tmp, + &adjust_direct_mapping_ops, NULL, + (void *) protect); + if (ret) + goto out; + } next: start = tmp; if (start < prev->vm_end)