From patchwork Wed Feb 1 12:52:56 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124367
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com,
 oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com,
 dbrazdil@google.com, ryan.roberts@arm.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 12/45] KVM: arm64: pkvm: Unify pkvm_teardown_donated_memory()
Date: Wed, 1 Feb 2023 12:52:56 +0000
Message-Id: <20230201125328.2186498-13-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>
X-Mailer: git-send-email 2.39.0

Tearing down donated memory requires clearing the memory, pushing the
pages into the reclaim memcache, and moving the mapping into the host
stage-2. Keep these operations in a single function.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         |  3 +-
 arch/arm64/kvm/hyp/nvhe/pkvm.c                | 50 +++++++------------
 3 files changed, 22 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index d4f4ffbb7dbb..021825aee854 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -86,6 +86,8 @@ void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc);
 
 void *pkvm_map_donated_memory(unsigned long host_va, size_t size);
 void pkvm_unmap_donated_memory(void *va, size_t size);
+void pkvm_teardown_donated_memory(struct kvm_hyp_memcache *mc, void *addr,
+				  size_t dirty_size);
 
 static __always_inline void __load_host_stage2(void)
 {
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 410361f41e38..cad5736026d5 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -314,8 +314,7 @@ void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc)
 	addr = hyp_alloc_pages(&vm->pool, 0);
 	while (addr) {
 		memset(hyp_virt_to_page(addr), 0, sizeof(struct hyp_page));
-		push_hyp_memcache(mc, addr, hyp_virt_to_phys);
-		WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(addr), 1));
+		pkvm_teardown_donated_memory(mc, addr, 0);
 		addr = hyp_alloc_pages(&vm->pool, 0);
 	}
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index a3711979bbd3..c51a8a592849 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -602,27 +602,28 @@ void *pkvm_map_donated_memory(unsigned long host_va, size_t size)
 	return va;
 }
 
-static void __unmap_donated_memory(void *va, size_t size)
+void pkvm_teardown_donated_memory(struct kvm_hyp_memcache *mc, void *va,
+				  size_t dirty_size)
 {
-	WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(va),
-				       PAGE_ALIGN(size) >> PAGE_SHIFT));
-}
+	size_t size = max(PAGE_ALIGN(dirty_size), PAGE_SIZE);
 
-void pkvm_unmap_donated_memory(void *va, size_t size)
-{
 	if (!va)
 		return;
 
-	memset(va, 0, size);
-	__unmap_donated_memory(va, size);
+	memset(va, 0, dirty_size);
+
+	if (mc) {
+		for (void *start = va; start < va + size; start += PAGE_SIZE)
+			push_hyp_memcache(mc, start, hyp_virt_to_phys);
+	}
+
+	WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(va),
+				       size >> PAGE_SHIFT));
 }
 
-static void unmap_donated_memory_noclear(void *va, size_t size)
+void pkvm_unmap_donated_memory(void *va, size_t size)
 {
-	if (!va)
-		return;
-
-	__unmap_donated_memory(va, size);
+	pkvm_teardown_donated_memory(NULL, va, size);
 }
 
 /*
@@ -759,18 +760,6 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
 	return ret;
 }
 
-static void
-teardown_donated_memory(struct kvm_hyp_memcache *mc, void *addr, size_t size)
-{
-	size = PAGE_ALIGN(size);
-	memset(addr, 0, size);
-
-	for (void *start = addr; start < addr + size; start += PAGE_SIZE)
-		push_hyp_memcache(mc, start, hyp_virt_to_phys);
-
-	unmap_donated_memory_noclear(addr, size);
-}
-
 int __pkvm_teardown_vm(pkvm_handle_t handle)
 {
 	size_t vm_size, last_ran_size;
@@ -813,19 +802,18 @@ int __pkvm_teardown_vm(pkvm_handle_t handle)
 		vcpu_mc = &hyp_vcpu->vcpu.arch.pkvm_memcache;
 		while (vcpu_mc->nr_pages) {
 			addr = pop_hyp_memcache(vcpu_mc, hyp_phys_to_virt);
-			push_hyp_memcache(mc, addr, hyp_virt_to_phys);
-			unmap_donated_memory_noclear(addr, PAGE_SIZE);
+			pkvm_teardown_donated_memory(mc, addr, 0);
 		}
-		teardown_donated_memory(mc, hyp_vcpu, sizeof(*hyp_vcpu));
+		pkvm_teardown_donated_memory(mc, hyp_vcpu, sizeof(*hyp_vcpu));
 	}
 
 	last_ran_size = pkvm_get_last_ran_size();
-	teardown_donated_memory(mc, hyp_vm->kvm.arch.mmu.last_vcpu_ran,
-				last_ran_size);
+	pkvm_teardown_donated_memory(mc, hyp_vm->kvm.arch.mmu.last_vcpu_ran,
+				     last_ran_size);
 
 	vm_size = pkvm_get_hyp_vm_size(hyp_vm->kvm.created_vcpus);
-	teardown_donated_memory(mc, hyp_vm, vm_size);
+	pkvm_teardown_donated_memory(mc, hyp_vm, vm_size);
 
 	hyp_unpin_shared_mem(host_kvm, host_kvm + 1);
 	return 0;
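For review, the size/page accounting in the new pkvm_teardown_donated_memory() can be sketched as a small host-side model. This is a sketch under simplifying assumptions: a 4K page size, and a toy memcache that only counts pushed pages; the struct and function names below are illustrative stand-ins, not the hypervisor code.

```c
#include <assert.h>
#include <stddef.h>

/* Assumed 4K page geometry (matches the common arm64 configuration). */
#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

/* Toy stand-in for struct kvm_hyp_memcache: just counts pushed pages. */
struct toy_memcache {
	unsigned long nr_pages;
};

/*
 * Model of the teardown helper's accounting: round dirty_size up to a
 * page multiple, but always cover at least one page (the memory was
 * donated in page granules even when nothing needs wiping, as in the
 * dirty_size == 0 callers). Every covered page is pushed to the reclaim
 * memcache when one is supplied. Returns the page count that the real
 * code would hand to __pkvm_hyp_donate_host().
 */
static unsigned long teardown_pages(struct toy_memcache *mc, size_t dirty_size)
{
	size_t size = PAGE_ALIGN(dirty_size);

	if (size < PAGE_SIZE)	/* max(PAGE_ALIGN(dirty_size), PAGE_SIZE) */
		size = PAGE_SIZE;

	/* The real helper does memset(va, 0, dirty_size) here. */

	if (mc)
		for (size_t off = 0; off < size; off += PAGE_SIZE)
			mc->nr_pages++;

	return size >> PAGE_SHIFT;
}
```

This makes the edge cases easy to check: a dirty_size of 0, as passed by reclaim_guest_pages() and the vCPU memcache loop, still pushes and donates exactly one page, while a structure spanning several pages (e.g. vm_size) is rounded up and handed back page by page.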