From patchwork Tue Jul 9 13:20:32 2024
X-Patchwork-Submitter: Patrick Roy <roypat@amazon.co.uk>
X-Patchwork-Id: 13727952
d="scan'208";a="217222162" Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO smtpout.prod.us-east-1.prod.farcaster.email.amazon.dev) ([10.43.8.2]) by smtp-border-fw-52004.iad7.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Jul 2024 13:21:14 +0000 Received: from EX19MTAUEC002.ant.amazon.com [10.0.0.204:13938] by smtpin.naws.us-east-1.prod.farcaster.email.amazon.dev [10.0.50.89:2525] with esmtp (Farcaster) id 57bc0aa8-df5f-4f40-9cd0-d79eb3ced12e; Tue, 9 Jul 2024 13:21:13 +0000 (UTC) X-Farcaster-Flow-ID: 57bc0aa8-df5f-4f40-9cd0-d79eb3ced12e Received: from EX19D008UEA004.ant.amazon.com (10.252.134.191) by EX19MTAUEC002.ant.amazon.com (10.252.135.253) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA) id 15.2.1258.34; Tue, 9 Jul 2024 13:21:07 +0000 Received: from EX19MTAUEC001.ant.amazon.com (10.252.135.222) by EX19D008UEA004.ant.amazon.com (10.252.134.191) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA) id 15.2.1258.34; Tue, 9 Jul 2024 13:21:07 +0000 Received: from ua2d7e1a6107c5b.ant.amazon.com (172.19.88.180) by mail-relay.amazon.com (10.252.135.200) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA) id 15.2.1258.34 via Frontend Transport; Tue, 9 Jul 2024 13:21:04 +0000 From: Patrick Roy To: , , , , , CC: Patrick Roy , , , , , , , , , , , , , , , , , Subject: [RFC PATCH 4/8] kvm: x86: support walking guest page tables in gmem Date: Tue, 9 Jul 2024 14:20:32 +0100 Message-ID: <20240709132041.3625501-5-roypat@amazon.co.uk> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240709132041.3625501-1-roypat@amazon.co.uk> References: <20240709132041.3625501-1-roypat@amazon.co.uk> MIME-Version: 1.0 X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 741A018001B X-Stat-Signature: uzzk7ns5iwfbjjo43ayi4em9rj9m5zwn X-Rspam-User: X-HE-Tag: 1720531274-63084 X-HE-Meta: U2FsdGVkX19Fu7R4Mu3hsXxjTW/DKDs4QaI7RNsGplbD9ts00lPUb5UuA44SdEGz8tkazMMkJ4aZXosH36yWsqc4ri8/GtCZuDA7zbwfK1gKR7PEi6e/8VnKbl6V2itUZurcDHJ/mkBMIPpRMWjINpX4uwoiOB2L1QDUweFOaIr6JC2UvZUeZuhWOCuLgiN1dh9buHUy9+TuQG7U+nOcR0EcQeS+RDZR5TSQvH72vqO9c9tcdDqsn9ExzvBi2gn42uc/Sx4WSmP302S23U4gqiA1rxSyYOJmv7Ex4CAFwjlO+G3od8C9iIfWU3dB4KBYTVAMyp74jGiwZ9000EKo1CXpMvfkQOCsN6U1eH8H13krXlZO+2fTIhbKFu4jVqtrGA/LEPy1ZJ998AOhJcIvifhS5mrIY9RhIGCx4ZZ9hc8Karsa7a44NC427BSr9GSu3ajBhAon2wTGl3209PHF2bA51il/sIGyh7hruLWezZ6GYpDhpdUvTn79if8+R0q7c/k/5EOkasfnyx4n/39XKB34O00qBhZ/sV1hcsy2wxujTbJLVZSch+cMDxOzOl8z98X5gSqilXMT/3WYxBynm/A6RQbJafzGdxTjzYDoY+RUIRhr/uuBll12xcFHWHlJhdvHIgGLOVctGl+px5cTs31wLnr7O6BG8M29IYD2tog5hFplnZF0OqXN2cAGXt+41+ZF3QhbhPrPoWrHl5IzRCjjHeR/5FGYexvCLBMzdqfAWvvVvN/vZcFPhlewrGk7vT0znbIdJEDcEDdI+POfssYxfR7ZmueRgFV5ddjZyiW2cf+ePP4sqvSyzuJLNsO7U4uLbRStFPqM0cT3wKOiQm+Jn1U/swgBfYIJRYghCEJYw/WT5W4koekFy/ttaC8/DlcAhl/YBYKBRSJs/lx+LDEGxe1v/SwdUQhuraHqjz9DOq7BwE47WO2uur3zcRQBfrcEU8iyWgfIAbDcVPx baZNa3PH YbX6XO+HzyPQhYrGgEpuq8Q3+NZLHK9Z/TSxpGjb8z+4p//B+FzAIXNe67V9YKaBjPZgBuGoE0sqKUmPO0V1qWAFzn4IQYy5FwjcST8ZRlgLByWCdCiSIonQSStaHrU9/hIgb0rKlYQopot0r4PhYIBRUs06+tfSLnHP8lo67p+4xnvoRDWc6ci3/YnudnRFFggsAv+OXSpJkVjDtJmQBU87qdQaUUnXMkxyfh99noFt6Fsft9LdA8qCL3497RsVl77DSzNBy+7m3XzvjBcW3FSCyGGoeAHEBFu6kJhUGt1rUUjIo1KDBqOSfP9yIacuix9oJ5KLLmBf9H6GIM5PQBz+EIlKUO6eIllyKg3iPZbOE9NWGSx8FQgnduA== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Update the logic in paging_tmpl.h to work with guest_private memory. 
Previously, if the guest's page tables were in gfns marked as private,
the page table walker would error out, since KVM cannot access gmem
through the userspace mappings it normally uses to read guest page
tables.

Let the guest page table walker access gmem by making it use
gfn_to_pfn_caches, which are already gmem-aware, and will later also
handle on-demand mapping of gmem once it supports being removed from
the direct map. We reuse the gfn_to_pfn_cache here to avoid
implementing yet another remapping solution to support the cmpxchg
used to set the "accessed" bit on guest PTEs.

The only case that still needs special handling is page tables in
read-only memslots, as gfn_to_pfn_caches cannot be used for read-only
memory. In this case, use kvm_vcpu_read_guest (which is also
gmem-aware) instead: there is no need to cache the gfn->pfn
translation here, since the walker never sets the accessed bit on PTEs
in read-only memory, so no cmpxchg on the PTE is needed either.

Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
---
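A note on the access pattern, since it recurs in both hunks below:
stripped of the walker's retry bookkeeping, reading a guest PTE
through a gfn_to_pfn_cache boils down to roughly the following
(a condensed sketch for illustration only, not part of the patch;
read_pte_via_gpc() is a made-up helper, and all failure paths are
collapsed into -EFAULT):

    static int read_pte_via_gpc(struct kvm_vcpu *vcpu,
                                struct gfn_to_pfn_cache *gpc,
                                gpa_t pte_gpa, u64 *pte)
    {
            unsigned long flags;

            /* walk_addr_generic() initializes its caches once per walk... */
            kvm_gpc_init(gpc, vcpu->kvm);
            /* ...and activates one cache per page table level. */
            if (kvm_gpc_activate(gpc, pte_gpa, sizeof(*pte)))
                    return -EFAULT;

            /*
             * The cached mapping can be invalidated concurrently, so it
             * must be revalidated under the cache's read lock before every
             * access, and refreshed (with the lock dropped) when stale.
             */
            read_lock_irqsave(&gpc->lock, flags);
            while (!kvm_gpc_check(gpc, sizeof(*pte))) {
                    read_unlock_irqrestore(&gpc->lock, flags);
                    if (kvm_gpc_refresh(gpc, sizeof(*pte))) {
                            kvm_gpc_deactivate(gpc);
                            return -EFAULT;
                    }
                    read_lock_irqsave(&gpc->lock, flags);
            }

            *pte = *(u64 *)gpc->khva;
            read_unlock_irqrestore(&gpc->lock, flags);
            return 0;
    }

This check/refresh loop mirrors what the Xen enlightenment code
already does with its gfn_to_pfn_caches: kvm_gpc_check() must be
called with the cache's read lock held, and the lock must be dropped
before calling kvm_gpc_refresh().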
 arch/x86/kvm/mmu/paging_tmpl.h | 94 ++++++++++++++++++++++++++++------
 1 file changed, 77 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 69941cebb3a8..ddf3b4bd479e 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -84,7 +84,7 @@ struct guest_walker {
 	pt_element_t ptes[PT_MAX_FULL_LEVELS];
 	pt_element_t prefetch_ptes[PTE_PREFETCH_NUM];
 	gpa_t pte_gpa[PT_MAX_FULL_LEVELS];
-	pt_element_t __user *ptep_user[PT_MAX_FULL_LEVELS];
+	struct gfn_to_pfn_cache ptep_caches[PT_MAX_FULL_LEVELS];
 	bool pte_writable[PT_MAX_FULL_LEVELS];
 	unsigned int pt_access[PT_MAX_FULL_LEVELS];
 	unsigned int pte_access;
@@ -201,7 +201,7 @@ static int FNAME(update_accessed_dirty_bits)(struct kvm_vcpu *vcpu,
 {
 	unsigned level, index;
 	pt_element_t pte, orig_pte;
-	pt_element_t __user *ptep_user;
+	struct gfn_to_pfn_cache *pte_cache;
 	gfn_t table_gfn;
 	int ret;
 
@@ -210,10 +210,12 @@ static int FNAME(update_accessed_dirty_bits)(struct kvm_vcpu *vcpu,
 		return 0;
 
 	for (level = walker->max_level; level >= walker->level; --level) {
+		unsigned long flags;
+
 		pte = orig_pte = walker->ptes[level - 1];
 		table_gfn = walker->table_gfn[level - 1];
-		ptep_user = walker->ptep_user[level - 1];
-		index = offset_in_page(ptep_user) / sizeof(pt_element_t);
+		pte_cache = &walker->ptep_caches[level - 1];
+		index = offset_in_page(pte_cache->khva) / sizeof(pt_element_t);
 		if (!(pte & PT_GUEST_ACCESSED_MASK)) {
 			trace_kvm_mmu_set_accessed_bit(table_gfn, index, sizeof(pte));
 			pte |= PT_GUEST_ACCESSED_MASK;
@@ -246,11 +248,26 @@
 		if (unlikely(!walker->pte_writable[level - 1]))
 			continue;
 
-		ret = __try_cmpxchg_user(ptep_user, &orig_pte, pte, fault);
+		read_lock_irqsave(&pte_cache->lock, flags);
+		while (!kvm_gpc_check(pte_cache, sizeof(pte))) {
+			read_unlock_irqrestore(&pte_cache->lock, flags);
+
+			ret = kvm_gpc_refresh(pte_cache, sizeof(pte));
+			if (ret)
+				return ret;
+
+			read_lock_irqsave(&pte_cache->lock, flags);
+		}
+		ret = __try_cmpxchg((pt_element_t *)pte_cache->khva, &orig_pte, pte, sizeof(pte));
+
+		if (!ret)
+			kvm_gpc_mark_dirty_in_slot(pte_cache);
+
+		read_unlock_irqrestore(&pte_cache->lock, flags);
+
 		if (ret)
 			return ret;
 
-		kvm_vcpu_mark_page_dirty(vcpu, table_gfn);
 		walker->ptes[level - 1] = pte;
 	}
 	return 0;
@@ -296,6 +313,12 @@ static inline bool FNAME(is_last_gpte)(struct kvm_mmu *mmu,
 	return gpte & PT_PAGE_SIZE_MASK;
 }
 
+
+static void FNAME(walk_deactivate_gpcs)(struct guest_walker *walker) {
+	for (unsigned int level = 0; level < PT_MAX_FULL_LEVELS; ++level)
+		kvm_gpc_deactivate(&walker->ptep_caches[level]);
+}
+
 /*
  * Fetch a guest pte for a guest virtual address, or for an L2's GPA.
  */
@@ -305,7 +328,6 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 {
 	int ret;
 	pt_element_t pte;
-	pt_element_t __user *ptep_user;
 	gfn_t table_gfn;
 	u64 pt_access, pte_access;
 	unsigned index, accessed_dirty, pte_pkey;
@@ -320,8 +342,17 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 	u16 errcode = 0;
 	gpa_t real_gpa;
 	gfn_t gfn;
+	struct gfn_to_pfn_cache *pte_cache;
 
 	trace_kvm_mmu_pagetable_walk(addr, access);
+
+	for (unsigned int level = 0; level < PT_MAX_FULL_LEVELS; ++level) {
+		pte_cache = &walker->ptep_caches[level];
+
+		memset(pte_cache, 0, sizeof(*pte_cache));
+		kvm_gpc_init(pte_cache, vcpu->kvm);
+	}
+
 retry_walk:
 	walker->level = mmu->cpu_role.base.level;
 	pte = kvm_mmu_get_guest_pgd(vcpu, mmu);
@@ -362,11 +393,13 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 
 	do {
 		struct kvm_memory_slot *slot;
-		unsigned long host_addr;
+		unsigned long flags;
 
 		pt_access = pte_access;
 		--walker->level;
 
+		pte_cache = &walker->ptep_caches[walker->level - 1];
+
 		index = PT_INDEX(addr, walker->level);
 		table_gfn = gpte_to_gfn(pte);
 		offset = index * sizeof(pt_element_t);
@@ -396,15 +429,36 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 		if (!kvm_is_visible_memslot(slot))
 			goto error;
 
-		host_addr = gfn_to_hva_memslot_prot(slot, gpa_to_gfn(real_gpa),
-						    &walker->pte_writable[walker->level - 1]);
-		if (unlikely(kvm_is_error_hva(host_addr)))
-			goto error;
+		/*
+		 * gfn_to_pfn_cache expects the memory to be writable. However,
+		 * if the memory is not writable, we do not need caching in the
+		 * first place, as we only need it to later potentially write
+		 * the access bit (which we cannot do anyway if the memory is
+		 * readonly).
+		 */
+		if (slot->flags & KVM_MEM_READONLY) {
+			if (kvm_vcpu_read_guest(vcpu, real_gpa + offset, &pte, sizeof(pte)))
+				goto error;
+		} else {
+			if (kvm_gpc_activate(pte_cache, real_gpa + offset,
+					     sizeof(pte)))
+				goto error;
 
-		ptep_user = (pt_element_t __user *)((void *)host_addr + offset);
-		if (unlikely(__get_user(pte, ptep_user)))
-			goto error;
-		walker->ptep_user[walker->level - 1] = ptep_user;
+			read_lock_irqsave(&pte_cache->lock, flags);
+			while (!kvm_gpc_check(pte_cache, sizeof(pte))) {
+				read_unlock_irqrestore(&pte_cache->lock, flags);
+
+				if (kvm_gpc_refresh(pte_cache, sizeof(pte)))
+					goto error;
+
+				read_lock_irqsave(&pte_cache->lock, flags);
+			}
+
+			pte = *(pt_element_t *)pte_cache->khva;
+			read_unlock_irqrestore(&pte_cache->lock, flags);
+
+			walker->pte_writable[walker->level - 1] = true;
+		}
 
 		trace_kvm_mmu_paging_element(pte, walker->level);
 
@@ -467,13 +521,19 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 						addr, write_fault);
 		if (unlikely(ret < 0))
 			goto error;
-		else if (ret)
+		else if (ret) {
+			FNAME(walk_deactivate_gpcs)(walker);
 			goto retry_walk;
+		}
 	}
 
+	FNAME(walk_deactivate_gpcs)(walker);
+
 	return 1;
 
 error:
+	FNAME(walk_deactivate_gpcs)(walker);
+
 	errcode |= write_fault | user_fault;
 	if (fetch_fault && (is_efer_nx(mmu) || is_cr4_smep(mmu)))
 		errcode |= PFERR_FETCH_MASK;
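The accessed-bit update in FNAME(update_accessed_dirty_bits) above
follows the same check/refresh dance, with the read replaced by a
cmpxchg on the cached kernel mapping. Condensed, it looks roughly
like the following (again illustrative only, not part of the patch;
set_accessed_bit_via_gpc() is a hypothetical helper, and it uses the
generic try_cmpxchg(), which returns true on success, rather than the
raw __try_cmpxchg() from the hunk above):

    /*
     * Returns 0 on success, -EAGAIN if the cache needs a
     * kvm_gpc_refresh() and a retry, -EBUSY if the guest changed the
     * PTE concurrently. Expects an already-activated cache.
     */
    static int set_accessed_bit_via_gpc(struct gfn_to_pfn_cache *gpc,
                                        u64 orig_pte)
    {
            u64 new_pte = orig_pte | PT_GUEST_ACCESSED_MASK;
            unsigned long flags;
            int ret = 0;

            read_lock_irqsave(&gpc->lock, flags);
            if (!kvm_gpc_check(gpc, sizeof(new_pte)))
                    ret = -EAGAIN;
            else if (!try_cmpxchg((u64 *)gpc->khva, &orig_pte, new_pte))
                    ret = -EBUSY;
            else
                    /* The cmpxchg dirtied the page holding the guest PT. */
                    kvm_gpc_mark_dirty_in_slot(gpc);
            read_unlock_irqrestore(&gpc->lock, flags);

            return ret;
    }

For page tables in read-only memslots, none of this machinery is
needed: the walker reads the PTE with a plain, gmem-aware
kvm_vcpu_read_guest(vcpu, pte_gpa, &pte, sizeof(pte)) and never
writes it back.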