From patchwork Fri Oct 7 23:28:11 2022
X-Patchwork-Submitter: Oliver Upton
X-Patchwork-Id: 13001530
From: Oliver Upton
To: Marc Zyngier, James Morse, Alexandru Elisei
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org, Reiji Watanabe, Ricardo Koller, David Matlack,
	Quentin Perret, Ben Gardon, Gavin Shan, Peter Xu, Will Deacon,
	Sean Christopherson, kvmarm@lists.linux.dev, Oliver Upton
Subject: [PATCH v2 08/15] KVM: arm64: Protect stage-2 traversal with RCU
Date: Fri, 7 Oct 2022 23:28:11 +0000
Message-Id: <20221007232818.459650-9-oliver.upton@linux.dev>
In-Reply-To: <20221007232818.459650-1-oliver.upton@linux.dev>
References: <20221007232818.459650-1-oliver.upton@linux.dev>

The use of RCU is necessary to safely change the stage-2 page tables in
parallel. Acquire and release the RCU read lock when traversing the page
tables.

Use the _raw() flavor of rcu_dereference when changes to the page tables
are otherwise protected from parallel software walkers (e.g. holding the
write lock).

Signed-off-by: Oliver Upton
---
 arch/arm64/include/asm/kvm_pgtable.h | 34 ++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/pgtable.c         |  7 +++++-
 2 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index beb89eac155c..60c37e5e77dd 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -25,6 +25,13 @@ static inline u64 kvm_get_parange(u64 mmfr0)
 
 typedef u64 kvm_pte_t;
 
+/*
+ * RCU cannot be used in a non-kernel context such as the hyp. As such, page
+ * table walkers used in hyp do not call into RCU and instead use other
+ * synchronization mechanisms (such as a spinlock).
+ */
+#if defined(__KVM_NVHE_HYPERVISOR__) || defined(__KVM_VHE_HYPERVISOR__)
+
 typedef kvm_pte_t *kvm_pteref_t;
 
 static inline kvm_pte_t *kvm_dereference_pteref(kvm_pteref_t pteref, bool shared)
@@ -32,6 +39,33 @@ static inline kvm_pte_t *kvm_dereference_pteref(kvm_pteref_t pteref, bool shared
 	return pteref;
 }
 
+static inline void kvm_pgtable_walk_begin(void) {}
+static inline void kvm_pgtable_walk_end(void) {}
+
+#else
+
+typedef kvm_pte_t __rcu *kvm_pteref_t;
+
+static inline kvm_pte_t *kvm_dereference_pteref(kvm_pteref_t pteref, bool shared)
+{
+	if (shared)
+		return rcu_dereference(pteref);
+
+	return rcu_dereference_raw(pteref);
+}
+
+static inline void kvm_pgtable_walk_begin(void)
+{
+	rcu_read_lock();
+}
+
+static inline void kvm_pgtable_walk_end(void)
+{
+	rcu_read_unlock();
+}
+
+#endif
+
 #define KVM_PTE_VALID			BIT(0)
 
 #define KVM_PTE_ADDR_MASK		GENMASK(47, PAGE_SHIFT)
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 6b6e1ed7ee2f..c2be15850497 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -281,8 +281,13 @@ int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
 		.end	= PAGE_ALIGN(walk_data.addr + size),
 		.walker	= walker,
 	};
+	int r;
 
-	return _kvm_pgtable_walk(pgt, &walk_data);
+	kvm_pgtable_walk_begin();
+	r = _kvm_pgtable_walk(pgt, &walk_data);
+	kvm_pgtable_walk_end();
+
+	return r;
 }
 
 struct leaf_walk_data {
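
For reference, the calling convention implied by the new helpers can be
pictured with the sketch below. This is a hypothetical illustration, not code
from the patch: walk_one_level() and example_shared_walk() are invented
stand-ins for the real recursive walker and its caller. Only
kvm_pgtable_walk_begin(), kvm_pgtable_walk_end(), and kvm_dereference_pteref()
come from the diff above.

#include <asm/kvm_pgtable.h>	/* kvm_pte_t, kvm_pteref_t, new walk helpers */
#include <linux/errno.h>
#include <linux/rcupdate.h>

/*
 * Hypothetical stand-in for one level of the table walk. With shared == true,
 * kvm_dereference_pteref() uses rcu_dereference(), so the caller must be
 * inside an RCU read-side critical section. With shared == false it uses
 * rcu_dereference_raw(), on the assumption that the caller excludes
 * concurrent writers by other means (e.g. holding the MMU write lock).
 */
static int walk_one_level(kvm_pteref_t table, u64 idx, bool shared)
{
	kvm_pte_t *ptep = kvm_dereference_pteref(table, shared);
	kvm_pte_t pte = READ_ONCE(ptep[idx]);

	/* ... decode pte, recurse into the child table if it is valid ... */
	return pte ? 0 : -ENOENT;
}

/*
 * A shared (potentially racing) walker brackets the traversal with the
 * begin/end helpers, which map to rcu_read_lock()/rcu_read_unlock() in the
 * kernel build and to no-ops at hyp.
 */
int example_shared_walk(kvm_pteref_t pgd, u64 idx)
{
	int r;

	kvm_pgtable_walk_begin();
	r = walk_one_level(pgd, idx, true);
	kvm_pgtable_walk_end();

	return r;
}

In the patch above, kvm_pgtable_walk() plays the role of example_shared_walk():
it wraps _kvm_pgtable_walk() in the begin/end pair so that any table pages
freed by a concurrent writer remain valid for the duration of the traversal.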