From patchwork Fri Mar 4 19:48:51 2022
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 12769981
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    Jim Mattson, erdemaktas@google.com, Connor Kuehl, Sean Christopherson
Subject: [RFC PATCH v5 035/104] KVM: x86/mmu: Disallow dirty logging for x86 TDX
Date: Fri, 4 Mar 2022 11:48:51 -0800
Message-Id: <7fe3a6d75d0ad6469c97e2edf34a1886ff7be7be.1646422845.git.isaku.yamahata@intel.com>
X-Mailing-List: kvm@vger.kernel.org

From: Sean Christopherson

TDX doesn't support dirty logging.  Report that dirty logging isn't
supported so that the device model, for example qemu, can handle it
properly.
Signed-off-by: Sean Christopherson
Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
Reviewed-by: Paolo Bonzini
---
 arch/x86/kvm/x86.c       |  5 +++++
 include/linux/kvm_host.h |  1 +
 virt/kvm/kvm_main.c      | 15 ++++++++++++---
 3 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c52a052e208c..da411bcd8cbc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12876,6 +12876,11 @@ int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
 }
 EXPORT_SYMBOL_GPL(kvm_sev_es_string_io);
 
+bool kvm_arch_dirty_log_supported(struct kvm *kvm)
+{
+	return kvm->arch.vm_type != KVM_X86_TDX_VM;
+}
+
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_entry);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index a56044a31bc6..86f984e0c93f 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1423,6 +1423,7 @@ bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu);
 int kvm_arch_post_init_vm(struct kvm *kvm);
 void kvm_arch_pre_destroy_vm(struct kvm *kvm);
 int kvm_arch_create_vm_debugfs(struct kvm *kvm);
+bool kvm_arch_dirty_log_supported(struct kvm *kvm);
 
 #ifndef __KVM_HAVE_ARCH_VM_ALLOC
 /*

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3adee9c6b370..ae3bf553f215 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1423,9 +1423,18 @@ static void kvm_replace_memslot(struct kvm *kvm,
 	}
 }
 
-static int check_memory_region_flags(const struct kvm_userspace_memory_region *mem)
+bool __weak kvm_arch_dirty_log_supported(struct kvm *kvm)
 {
-	u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;
+	return true;
+}
+
+static int check_memory_region_flags(struct kvm *kvm,
+				     const struct kvm_userspace_memory_region *mem)
+{
+	u32 valid_flags = 0;
+
+	if (kvm_arch_dirty_log_supported(kvm))
+		valid_flags |= KVM_MEM_LOG_DIRTY_PAGES;
 
 #ifdef __KVM_HAVE_READONLY_MEM
 	valid_flags |= KVM_MEM_READONLY;
@@ -1826,7 +1835,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	int as_id, id;
 	int r;
 
-	r = check_memory_region_flags(mem);
+	r = check_memory_region_flags(kvm, mem);
 	if (r)
 		return r;
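
A hypothetical userspace-side sketch (not part of the patch) of how a VMM
would observe this change: once check_memory_region_flags() stops treating
KVM_MEM_LOG_DIRTY_PAGES as valid, KVM_SET_USER_MEMORY_REGION fails with
-EINVAL when the flag is set on such a VM, and the VMM can fall back to
running without dirty-log-based features.  The program below probes a
default VM (type 0); creating a TDX VM additionally needs the
KVM_X86_TDX_VM type and init ioctls introduced elsewhere in this series,
omitted here.  The helper name try_dirty_log_slot() is made up for
illustration.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

/* Try to register a memslot with dirty logging enabled. */
static int try_dirty_log_slot(int vm_fd)
{
	/* Back the slot with one anonymous page. */
	void *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	struct kvm_userspace_memory_region region = {
		.slot = 0,
		.flags = KVM_MEM_LOG_DIRTY_PAGES,	/* the flag under test */
		.guest_phys_addr = 0,
		.memory_size = 4096,
		.userspace_addr = (unsigned long)mem,
	};

	if (mem == MAP_FAILED)
		return -errno;
	/* -EINVAL here means the kernel reports dirty logging as unsupported. */
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region) ? -errno : 0;
}

int main(void)
{
	int kvm_fd = open("/dev/kvm", O_RDWR);
	int vm_fd, ret;

	if (kvm_fd < 0) {
		perror("open /dev/kvm");
		return 1;
	}
	/*
	 * Type 0 is the default x86 VM; a TDX guest would instead be created
	 * with the KVM_X86_TDX_VM type from this series (TDX-specific setup
	 * not shown).
	 */
	vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);
	if (vm_fd < 0) {
		perror("KVM_CREATE_VM");
		return 1;
	}

	ret = try_dirty_log_slot(vm_fd);
	if (ret == -EINVAL)
		printf("dirty logging not supported for this VM type\n");
	else if (ret == 0)
		printf("dirty logging accepted\n");
	else
		printf("KVM_SET_USER_MEMORY_REGION failed: %s\n", strerror(-ret));
	return 0;
}

On a TDX VM with this patch applied, the probe would take the -EINVAL
branch, since kvm_arch_dirty_log_supported() returns false for
KVM_X86_TDX_VM.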