From patchwork Fri Oct 27 18:22:02 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13438947
Reply-To: Sean Christopherson
Date: Fri, 27 Oct 2023 11:22:02 -0700
In-Reply-To: <20231027182217.3615211-1-seanjc@google.com>
References: <20231027182217.3615211-1-seanjc@google.com>
X-Mailer: git-send-email 2.42.0.820.g83a721a137-goog
Message-ID: <20231027182217.3615211-21-seanjc@google.com>
Subject: [PATCH v13 20/35] KVM: x86/mmu: Handle page fault for private memory
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Huacai Chen, Michael Ellerman,
    Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
    Alexander Viro, Christian Brauner, "Matthew Wilcox (Oracle)",
    Andrew Morton
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-mips@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Xiaoyao Li, Xu Yilun,
    Chao Peng, Fuad Tabba, Jarkko Sakkinen, Anish Moorthy, David Matlack,
    Yu Zhang, Isaku Yamahata, Mickaël Salaün, Vlastimil Babka,
    Vishal Annapurve, Ackerley Tng, Maciej Szmigiero, David Hildenbrand,
    Quentin Perret, Michael Roth, Wang, Liam Merwick, Isaku Yamahata,
    "Kirill A. Shutemov"
Shutemov" X-Rspamd-Queue-Id: 504E4180005 X-Rspam-User: X-Rspamd-Server: rspam11 X-Stat-Signature: zdndj4daspsawjitysjuo7zojjdnh7pu X-HE-Tag: 1698430986-140595 X-HE-Meta: U2FsdGVkX1/HXERDyam2fQRdrXmVtfwYORYRnNddvKv8bjPzTN+AsUBqhqqQ2Dx/FZ5XwVGv6V/1CS0BhWupkwMUhUN+CkT1RLn6SisaH6Myjy+WJPhLL4EY7cDcjhp/JTg2tRuRiemi9QT2mnTVTpl66mg2db/BpgO7GTOsCgUkjewQNUkdvhIY7yCtHpQWUzCe1WVAA/3qqSbJ4Sf1Tp/jrT+adqdu/S2nWWy/AUFWl7kKEo3coHxJlLXuWiZ0CMeG5UR6/ZE/kfs8liP0XfUbvBgKXZLoa/pkHW8KdLL1ubmdq2MIwhBDaqNSm0PQzRw6rbHUlKyQGxHn8XOPpeGxGZrw+eRTODGnghO+EaB1eWZBjalqHaGZDr5FTnvOAeAwfowYg/MeEjPV5MHQUbugLKkjY0zDy1KI5n4PX9vuDsxMn4QBAVUfgdP9IqtmEbp1ah2pCf1hIP69+cJ7n58Ye3J7BXzsy4ca8tkwGULRrxdiLF8l4I+INGbS4+wNiC2aXUWnR6FvTDLFTNhVWF1fS5KUT2F9wVHi+WO8t7DQTQXuAy4gE/Fwm8HYfsMZ/ietTavrmNxjVSwod7s+umPtrOvOJcZA6nTRBfArri4PjQS+qma+tlLXE50hSV5ynnMfxJnv9JP99bNWhQJkqtYKZnhSZulqztLeRPlJDH5JCj5ghb/e3F1iVk/GdtQrVjIcQr0sqB+WuGpEZWv2s+tkZyHmRB7qsueW9nStH9axLE9Df7Mm5iuYcqakidoh6UKxmahUd6vpuAq8p0zSgQkDibmznateEv8M5mDnoxXx/79hyXKRv3tQLlsdiptip5BjZdbl3hbIXnrWXxA1cgyBqzIzUmmeulTRuA8bjhtlPUwCf9gmVl3VRIXhSdEPLHrruUVjXNQv0MTSOW92WRnqo2UwEhEjixNcZjwsP/aWCORnP7wWCKVw5c1Qxzp5lmujOfwKxjPjVoJsxiW 4DobCq8q ot5U9C1lzMJXaO/+J4OUQm+z7bSFD+HeJOgrirBZVzTHwpz1eLKmjZcM2oHpCIMTVoF3wSvpTzmA1QSftUWbiXK9GoXKq+OurcvQPMOpuA8OwvRoAVzEEWP7G/zY67sa5FC3XZ2Lv94Whm4I5G0N/9QELfjsOfmTA32pm3JpmzO1BgEjET4kMX/EbhfZBCHcOGNhUdf6eH4RBE2PB1SUqyLyFvXPlIg/OXy/kisn3E0gViw1jJgh2RO2pH5fJxkOtduiovJZ5r6BzxkokEpa7nXsYNX2uLNXcOa1zbmzrpBF1IRunNPyStl94Gusn0mRhvY17E9QSf2ECN/Pyla4kvCQDACvCnS8A9gHtVEEZIGmtnUGNLT/w1V3X4bRH1BZtYCRMDdSlb93DQpQrxy3xWyKd/fcsJslxDjIFeH6uGFV6w7nVldk+00z+pYSrbyu3vfu1XlLXWXSYxZsRkGcBkX+h7aENLN0ggplgH3eSD/9Uz5yGpKlsJ9P0C7d+qZuC0rdrMQK3tkpVOn+YzYD+Sd/tleiLgonVPBj8bh2FybQAHmphQdz2Se2BdN1M6DNuTmsFa2mX1fXoG6BbgKVyscOaDR+yx2RHiCGOWHLF1KhJPpA4uJ/W5qzON1rwTKCZZzRodm0M7neao9S0EIUzqX3sp0i9GneQlKE8+FzGZSJfbqqKus6jsFO+TBJ643dMlLv1ImHrkbtURxQvN5CR/JkiEKnYJaz7JaIFlDgb10TgsJZvgBUI0XorLkYIshzIo2jjkldCH2LIFvKfsphM3V5hLtU05/CBtQuL7mM6nOKLPCyVhwX2j1xozOD1PTa7yObd X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Chao Peng Add support for resolving page faults on guest private memory for VMs that differentiate between "shared" and "private" memory. For such VMs, KVM_MEM_PRIVATE memslots can include both fd-based private memory and hva-based shared memory, and KVM needs to map in the "correct" variant, i.e. KVM needs to map the gfn shared/private as appropriate based on the current state of the gfn's KVM_MEMORY_ATTRIBUTE_PRIVATE flag. For AMD's SEV-SNP and Intel's TDX, the guest effectively gets to request shared vs. private via a bit in the guest page tables, i.e. what the guest wants may conflict with the current memory attributes. To support such "implicit" conversion requests, exit to user with KVM_EXIT_MEMORY_FAULT to forward the request to userspace. Add a new flag for memory faults, KVM_MEMORY_EXIT_FLAG_PRIVATE, to communicate whether the guest wants to map memory as shared vs. private. Like KVM_MEMORY_ATTRIBUTE_PRIVATE, use bit 3 for flagging private memory so that KVM can use bits 0-2 for capturing RWX behavior if/when userspace needs such information, e.g. a likely user of KVM_EXIT_MEMORY_FAULT is to exit on missing mappings when handling guest page fault VM-Exits. In that case, userspace will want to know RWX information in order to correctly/precisely resolve the fault. Note, private memory *must* be backed by guest_memfd, i.e. 
shared mappings always come from the host userspace page tables, and private
mappings always come from a guest_memfd instance.

Co-developed-by: Yu Zhang
Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
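Not part of the patch, purely for illustration: a minimal sketch of how a
userspace VMM might consume the new exit, assuming the
KVM_SET_MEMORY_ATTRIBUTES ioctl and struct kvm_memory_attributes from earlier
patches in this series; vm_fd and run below are placeholders for the VMM's
own handles.

  #include <err.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /*
   * KVM_RUN returns -1 with errno == EFAULT for KVM_EXIT_MEMORY_FAULT (see
   * the api.rst update below).  Flip the faulting range to the state the
   * guest asked for, then re-enter the guest so the fault is retried.
   */
  static void handle_memory_fault_exit(int vm_fd, struct kvm_run *run)
  {
          struct kvm_memory_attributes attrs = {
                  .address    = run->memory_fault.gpa,
                  .size       = run->memory_fault.size,
                  .attributes = (run->memory_fault.flags &
                                 KVM_MEMORY_EXIT_FLAG_PRIVATE) ?
                                KVM_MEMORY_ATTRIBUTE_PRIVATE : 0,
          };

          if (ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs))
                  err(1, "KVM_SET_MEMORY_ATTRIBUTES");
  }

Whether to honor the implicit conversion request (as the sketch blindly does)
or to treat it as a guest error is a userspace policy decision.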
 Documentation/virt/kvm/api.rst  |   8 ++-
 arch/x86/kvm/mmu/mmu.c          | 101 ++++++++++++++++++++++++++++++--
 arch/x86/kvm/mmu/mmu_internal.h |   1 +
 include/linux/kvm_host.h        |   8 ++-
 include/uapi/linux/kvm.h        |   1 +
 5 files changed, 110 insertions(+), 9 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 7f00c310c24a..38dc1fda4f45 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6837,6 +6837,7 @@ spec refer, https://github.com/riscv/riscv-sbi-doc.
 
 		/* KVM_EXIT_MEMORY_FAULT */
 		struct {
+  #define KVM_MEMORY_EXIT_FLAG_PRIVATE	(1ULL << 3)
 			__u64 flags;
 			__u64 gpa;
 			__u64 size;
@@ -6845,8 +6846,11 @@ spec refer, https://github.com/riscv/riscv-sbi-doc.
 KVM_EXIT_MEMORY_FAULT indicates the vCPU has encountered a memory fault that
 could not be resolved by KVM.  The 'gpa' and 'size' (in bytes) describe the
 guest physical address range [gpa, gpa + size) of the fault.  The 'flags' field
-describes properties of the faulting access that are likely pertinent.
-Currently, no flags are defined.
+describes properties of the faulting access that are likely pertinent:
+
+ - KVM_MEMORY_EXIT_FLAG_PRIVATE - When set, indicates the memory fault occurred
+   on a private memory access.  When clear, indicates the fault occurred on a
+   shared access.
 
 Note!  KVM_EXIT_MEMORY_FAULT is unique among all KVM exit reasons in that it
 accompanies a return code of '-1', not '0'!  errno will always be set to EFAULT
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4167d557c577..c4e758f0aebb 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3147,9 +3147,9 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
 	return level;
 }
 
-int kvm_mmu_max_mapping_level(struct kvm *kvm,
-			      const struct kvm_memory_slot *slot, gfn_t gfn,
-			      int max_level)
+static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
+				       const struct kvm_memory_slot *slot,
+				       gfn_t gfn, int max_level, bool is_private)
 {
 	struct kvm_lpage_info *linfo;
 	int host_level;
@@ -3161,6 +3161,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
 		break;
 	}
 
+	if (is_private)
+		return max_level;
+
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
@@ -3168,6 +3171,16 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
 	return min(host_level, max_level);
 }
 
+int kvm_mmu_max_mapping_level(struct kvm *kvm,
+			      const struct kvm_memory_slot *slot, gfn_t gfn,
+			      int max_level)
+{
+	bool is_private = kvm_slot_can_be_private(slot) &&
+			  kvm_mem_is_private(kvm, gfn);
+
+	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, max_level, is_private);
+}
+
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_memory_slot *slot = fault->slot;
@@ -3188,8 +3201,9 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 * Enforce the iTLB multihit workaround after capturing the requested
 	 * level, which will be used to do precise, accurate accounting.
 	 */
-	fault->req_level = kvm_mmu_max_mapping_level(vcpu->kvm, slot,
-						     fault->gfn, fault->max_level);
+	fault->req_level = __kvm_mmu_max_mapping_level(vcpu->kvm, slot,
+						       fault->gfn, fault->max_level,
+						       fault->is_private);
 	if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed)
 		return;
 
@@ -4261,6 +4275,55 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true, NULL);
 }
 
+static inline u8 kvm_max_level_for_order(int order)
+{
+	BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G);
+
+	KVM_MMU_WARN_ON(order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G) &&
+			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) &&
+			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_4K));
+
+	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G))
+		return PG_LEVEL_1G;
+
+	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
+		return PG_LEVEL_2M;
+
+	return PG_LEVEL_4K;
+}
+
+static void kvm_mmu_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
+					      struct kvm_page_fault *fault)
+{
+	kvm_prepare_memory_fault_exit(vcpu, fault->gfn << PAGE_SHIFT,
+				      PAGE_SIZE, fault->write, fault->exec,
+				      fault->is_private);
+}
+
+static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
+				   struct kvm_page_fault *fault)
+{
+	int max_order, r;
+
+	if (!kvm_slot_can_be_private(fault->slot)) {
+		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
+		return -EFAULT;
+	}
+
+	r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn,
+			     &max_order);
+	if (r) {
+		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
+		return r;
+	}
+
+	fault->max_level = min(kvm_max_level_for_order(max_order),
+			       fault->max_level);
+	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
+
+	return RET_PF_CONTINUE;
+}
+
 static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_memory_slot *slot = fault->slot;
@@ -4293,6 +4356,14 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 		return RET_PF_EMULATE;
 	}
 
+	if (fault->is_private != kvm_mem_is_private(vcpu->kvm, fault->gfn)) {
+		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
+		return -EFAULT;
+	}
+
+	if (fault->is_private)
+		return kvm_faultin_pfn_private(vcpu, fault);
+
 	async = false;
 	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, false, &async,
 					  fault->write, &fault->map_writable,
@@ -7173,6 +7244,26 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
 }
 
 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
+bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
+					struct kvm_gfn_range *range)
+{
+	/*
+	 * Zap SPTEs even if the slot can't be mapped PRIVATE.  KVM x86 only
+	 * supports KVM_MEMORY_ATTRIBUTE_PRIVATE, and so it *seems* like KVM
+	 * can simply ignore such slots.  But if userspace is making memory
+	 * PRIVATE, then KVM must prevent the guest from accessing the memory
+	 * as shared.  And if userspace is making memory SHARED and this point
+	 * is reached, then at least one page within the range was previously
+	 * PRIVATE, i.e. the slot's possible hugepage ranges are changing.
+	 * Zapping SPTEs in this case ensures KVM will reassess whether or not
+	 * a hugepage can be used for affected ranges.
+	 */
+	if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
+		return false;
+
+	return kvm_unmap_gfn_range(kvm, range);
+}
+
 static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
 				int level)
 {
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index decc1f153669..86c7cb692786 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -201,6 +201,7 @@ struct kvm_page_fault {
 
 	/* Derived from mmu and global state.  */
 	const bool is_tdp;
+	const bool is_private;
 	const bool nx_huge_page_workaround_enabled;
 
 	/*
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 7de93858054d..e3223cafd7db 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2358,14 +2358,18 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr)
 
 #define KVM_DIRTY_RING_MAX_ENTRIES  65536
 
 static inline void kvm_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
-						 gpa_t gpa, gpa_t size)
+						 gpa_t gpa, gpa_t size,
+						 bool is_write, bool is_exec,
+						 bool is_private)
 {
 	vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT;
 	vcpu->run->memory_fault.gpa = gpa;
 	vcpu->run->memory_fault.size = size;
-	/* Flags are not (yet) defined or communicated to userspace. */
+	/* RWX flags are not (yet) defined or communicated to userspace. */
 	vcpu->run->memory_fault.flags = 0;
+	if (is_private)
+		vcpu->run->memory_fault.flags |= KVM_MEMORY_EXIT_FLAG_PRIVATE;
 }
 
 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 33d542de0a61..29e9eb51dec9 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -527,6 +527,7 @@ struct kvm_run {
 		} notify;
 		/* KVM_EXIT_MEMORY_FAULT */
 		struct {
+#define KVM_MEMORY_EXIT_FLAG_PRIVATE	(1ULL << 3)
 			__u64 flags;
 			__u64 gpa;
 			__u64 size;
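For reference (illustration only, not part of the patch): on x86,
KVM_HPAGE_GFN_SHIFT() evaluates to 0, 9 and 18 for PG_LEVEL_4K, PG_LEVEL_2M
and PG_LEVEL_1G respectively, so the kvm_max_level_for_order() clamp added
above works out to, for example:

  order  0 (a 4KiB guest_memfd folio)  ->  PG_LEVEL_4K
  order  9 (a 2MiB guest_memfd folio)  ->  PG_LEVEL_2M
  order 18 (a 1GiB guest_memfd folio)  ->  PG_LEVEL_1G

kvm_faultin_pfn_private() then takes the min() of that level and the fault's
existing max_level.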