From patchwork Wed Nov 17 15:38:09 2021
X-Patchwork-Submitter: Alexandru Elisei
X-Patchwork-Id: 12692937
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 will@kernel.org, mark.rutland@arm.com
Subject: [RFC PATCH v5 05/38] KVM: arm64: Perform CMOs on locked memslots
 when userspace resets VCPUs
Date: Wed, 17 Nov 2021 15:38:09 +0000
Message-Id: <20211117153842.302159-6-alexandru.elisei@arm.com>
In-Reply-To: <20211117153842.302159-1-alexandru.elisei@arm.com>
References: <20211117153842.302159-1-alexandru.elisei@arm.com>

Userspace resets a VCPU that has already run by means of a
KVM_ARM_VCPU_INIT ioctl. This is usually done after a VM shutdown and
before the same VM is rebooted, and during this interval the VM memory
can be modified by userspace (for example, to copy the original guest
kernel image). In this situation, KVM unmaps the entire stage 2 address
space to trigger stage 2 faults, which ensures that the guest has the
same view of memory as the host's userspace.

Unmapping stage 2 is not an option for locked memslots, so instead
perform the cache maintenance operations the first time a VCPU is run,
similar to what KVM does when a memslot is locked.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/include/asm/kvm_host.h |  3 ++-
 arch/arm64/kvm/mmu.c              | 15 ++++++++++++++-
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3b4839b447c4..5f49a27ce289 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -115,7 +115,8 @@ struct kvm_arch_memory_slot {
 
 /* kvm->arch.mmu_pending_ops flags */
 #define KVM_LOCKED_MEMSLOT_FLUSH_DCACHE	0
-#define KVM_MAX_MMU_PENDING_OPS		1
+#define KVM_LOCKED_MEMSLOT_INVAL_ICACHE	1
+#define KVM_MAX_MMU_PENDING_OPS		2
 
 struct kvm_arch {
 	struct kvm_s2_mmu mmu;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 8e4787019840..188064c5839c 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -607,8 +607,16 @@ void stage2_unmap_vm(struct kvm *kvm)
 	spin_lock(&kvm->mmu_lock);
 
 	slots = kvm_memslots(kvm);
-	kvm_for_each_memslot(memslot, slots)
+	kvm_for_each_memslot(memslot, slots) {
+		if (memslot_is_locked(memslot)) {
+			set_bit(KVM_LOCKED_MEMSLOT_FLUSH_DCACHE,
+				&kvm->arch.mmu_pending_ops);
+			set_bit(KVM_LOCKED_MEMSLOT_INVAL_ICACHE,
+				&kvm->arch.mmu_pending_ops);
+			continue;
+		}
 		stage2_unmap_memslot(kvm, memslot);
+	}
 
 	spin_unlock(&kvm->mmu_lock);
 	mmap_read_unlock(current->mm);
@@ -1334,6 +1342,11 @@ void kvm_mmu_perform_pending_ops(struct kvm *kvm)
 		clear_bit(KVM_LOCKED_MEMSLOT_FLUSH_DCACHE, &kvm->arch.mmu_pending_ops);
 	}
 
+	if (test_bit(KVM_LOCKED_MEMSLOT_INVAL_ICACHE, &kvm->arch.mmu_pending_ops)) {
+		icache_inval_all_pou();
+		clear_bit(KVM_LOCKED_MEMSLOT_INVAL_ICACHE, &kvm->arch.mmu_pending_ops);
+	}
+
 out_unlock:
 	mutex_unlock(&kvm->slots_lock);
 	return;
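
---

For readers unfamiliar with the mmu_pending_ops mechanism, below is a
minimal, self-contained user-space sketch of the deferred cache
maintenance pattern this patch extends. It is illustrative only, not
the kernel implementation: the mock_* helpers are hypothetical
stand-ins for the kernel's stage2_flush_vm() and icache_inval_all_pou()
primitives, and the simple bit helpers model set_bit()/test_bit()/
clear_bit() on kvm->arch.mmu_pending_ops (without the locking the real
code relies on). The point of the pattern is that the reset path only
records which maintenance is owed, and the first VCPU to run performs
and acknowledges it exactly once.

	#include <stdio.h>
	#include <stdbool.h>

	/* Flag bits, mirroring the patch's kvm->arch.mmu_pending_ops flags. */
	#define KVM_LOCKED_MEMSLOT_FLUSH_DCACHE	0
	#define KVM_LOCKED_MEMSLOT_INVAL_ICACHE	1

	static unsigned long mmu_pending_ops;

	static void set_pending(int bit)   { mmu_pending_ops |= 1UL << bit; }
	static bool test_pending(int bit)  { return mmu_pending_ops & (1UL << bit); }
	static void clear_pending(int bit) { mmu_pending_ops &= ~(1UL << bit); }

	/* Hypothetical stand-ins for the kernel's cache maintenance. */
	static void mock_flush_dcache(void) { puts("D-cache: clean+invalidate locked memslots"); }
	static void mock_inval_icache(void) { puts("I-cache: invalidate all to PoU"); }

	/*
	 * Reset path: a locked memslot cannot be unmapped to force stage 2
	 * faults, so only record that cache maintenance is pending.
	 */
	static void reset_locked_memslot(void)
	{
		set_pending(KVM_LOCKED_MEMSLOT_FLUSH_DCACHE);
		set_pending(KVM_LOCKED_MEMSLOT_INVAL_ICACHE);
	}

	/*
	 * First-run path: perform whatever maintenance was deferred, then
	 * clear the flags so the work is done at most once.
	 */
	static void perform_pending_ops(void)
	{
		if (test_pending(KVM_LOCKED_MEMSLOT_FLUSH_DCACHE)) {
			mock_flush_dcache();
			clear_pending(KVM_LOCKED_MEMSLOT_FLUSH_DCACHE);
		}
		if (test_pending(KVM_LOCKED_MEMSLOT_INVAL_ICACHE)) {
			mock_inval_icache();
			clear_pending(KVM_LOCKED_MEMSLOT_INVAL_ICACHE);
		}
	}

	int main(void)
	{
		reset_locked_memslot();
		perform_pending_ops();	/* both ops run here */
		perform_pending_ops();	/* nothing left to do */
		return 0;
	}

Splitting the D-cache and I-cache work into separate bits matches the
patch's structure: each operation is tracked and acknowledged
independently, so later changes can pend one without the other.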