From patchwork Fri Aug 6 11:31:04 2021
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12423291
From: Will Deacon
To: linux-arm-kernel@lists.infradead.org
Cc: kernel-team@android.com, Will Deacon, Catalin Marinas, Marc Zyngier,
 Jade Alglave, Shameer Kolothum, kvmarm@lists.cs.columbia.edu,
 linux-arch@vger.kernel.org, stable@vger.kernel.org
Subject: [PATCH 1/4] arm64: mm: Fix TLBI vs ASID rollover
Date: Fri, 6 Aug 2021 12:31:04 +0100
Message-Id: <20210806113109.2475-2-will@kernel.org>
In-Reply-To: <20210806113109.2475-1-will@kernel.org>
References: <20210806113109.2475-1-will@kernel.org>

When switching to an 'mm_struct' for the first time following an ASID
rollover, a new ASID may be allocated and assigned to 'mm->context.id'.
This reassignment can happen concurrently with other operations on the
mm, such as unmapping pages and subsequently issuing TLB invalidation.

Consequently, we need to ensure that (a) accesses to 'mm->context.id'
are atomic and (b) all page-table updates made prior to a TLBI using the
old ASID are guaranteed to be visible to CPUs running with the new ASID.

This was found by inspection after reviewing the VMID changes from
Shameer but it looks like a real (yet hard to hit) bug.

Cc:
Cc: Catalin Marinas
Cc: Marc Zyngier
Cc: Jade Alglave
Cc: Shameer Kolothum
Signed-off-by: Will Deacon
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/mmu.h      | 29 +++++++++++++++++++++++++----
 arch/arm64/include/asm/tlbflush.h | 11 ++++++-----
 2 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 75beffe2ee8a..e9c30859f80c 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -27,11 +27,32 @@ typedef struct {
 } mm_context_t;
 
 /*
- * This macro is only used by the TLBI and low-level switch_mm() code,
- * neither of which can race with an ASID change. We therefore don't
- * need to reload the counter using atomic64_read().
+ * We use atomic64_read() here because the ASID for an 'mm_struct' can
+ * be reallocated when scheduling one of its threads following a
+ * rollover event (see new_context() and flush_context()). In this case,
+ * a concurrent TLBI (e.g. via try_to_unmap_one() and ptep_clear_flush())
+ * may use a stale ASID. This is fine in principle as the new ASID is
+ * guaranteed to be clean in the TLB, but the TLBI routines have to take
+ * care to handle the following race:
+ *
+ *    CPU 0                    CPU 1                          CPU 2
+ *
+ *    // ptep_clear_flush(mm)
+ *    xchg_relaxed(pte, 0)
+ *    DSB ISHST
+ *    old = ASID(mm)
+ *         |
+ *         |                   new = new_context(mm)
+ *         \-----------------> atomic_set(mm->context.id, new)
+ *                             cpu_switch_mm(mm)
+ *                             // Hardware walk of pte using new ASID
+ *    TLBI(old)
+ *
+ * In this scenario, the barrier on CPU 0 and the dependency on CPU 1
+ * ensure that the page-table walker on CPU 1 *must* see the invalid PTE
+ * written by CPU 0.
  */
-#define ASID(mm)	((mm)->context.id.counter & 0xffff)
+#define ASID(mm)	(atomic64_read(&(mm)->context.id) & 0xffff)
 
 static inline bool arm64_kernel_unmapped_at_el0(void)
 {
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index cc3f5a33ff9c..36f02892e1df 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -245,9 +245,10 @@ static inline void flush_tlb_all(void)
 
 static inline void flush_tlb_mm(struct mm_struct *mm)
 {
-	unsigned long asid = __TLBI_VADDR(0, ASID(mm));
+	unsigned long asid;
 
 	dsb(ishst);
+	asid = __TLBI_VADDR(0, ASID(mm));
 	__tlbi(aside1is, asid);
 	__tlbi_user(aside1is, asid);
 	dsb(ish);
@@ -256,9 +257,10 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
 					 unsigned long uaddr)
 {
-	unsigned long addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
+	unsigned long addr;
 
 	dsb(ishst);
+	addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
 	__tlbi(vale1is, addr);
 	__tlbi_user(vale1is, addr);
 }
@@ -283,9 +285,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 {
 	int num = 0;
 	int scale = 0;
-	unsigned long asid = ASID(vma->vm_mm);
-	unsigned long addr;
-	unsigned long pages;
+	unsigned long asid, addr, pages;
 
 	start = round_down(start, stride);
 	end = round_up(end, stride);
@@ -305,6 +305,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	}
 
 	dsb(ishst);
+	asid = ASID(vma->vm_mm);
 
 	/*
 	 * When the CPU does not support TLB range operations, flush the TLB
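
To make the resulting ordering easier to follow, this is flush_tlb_mm() as it
reads once the hunks above are applied; a sketch assembled from the diff, with
the block comment added for explanation rather than taken from the patch:

static inline void flush_tlb_mm(struct mm_struct *mm)
{
	unsigned long asid;

	/*
	 * The DSB publishes any prior PTE update (e.g. from
	 * ptep_clear_flush()) before the ASID is sampled below. A CPU
	 * switching to a newly allocated ASID can only do so after the
	 * atomic_set() of mm->context.id, so its hardware walker must
	 * observe the invalid PTE even if this TLBI ends up using the
	 * stale ASID value.
	 */
	dsb(ishst);
	asid = __TLBI_VADDR(0, ASID(mm));	/* ASID() now uses atomic64_read() */
	__tlbi(aside1is, asid);
	__tlbi_user(aside1is, asid);
	dsb(ish);
}
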
From patchwork Fri Aug 6 11:31:06 2021
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12423293
From: Will Deacon
To: linux-arm-kernel@lists.infradead.org
Cc: kernel-team@android.com, Will Deacon, Catalin Marinas, Marc Zyngier,
 Jade Alglave, Shameer Kolothum, kvmarm@lists.cs.columbia.edu,
 linux-arch@vger.kernel.org
Subject: [PATCH 2/4] KVM: arm64: Move kern_hyp_va() usage in __load_guest_stage2() into the callers
Date: Fri, 6 Aug 2021 12:31:06 +0100
Message-Id: <20210806113109.2475-4-will@kernel.org>
In-Reply-To: <20210806113109.2475-1-will@kernel.org>
References: <20210806113109.2475-1-will@kernel.org>

From: Marc Zyngier

It is a bit awkward to use kern_hyp_va() in __load_guest_stage2(),
especially as the helper is shared between VHE and nVHE. Instead, move
the use of kern_hyp_va() into the nVHE code and pass a pointer to the
kvm->arch structure to the helper.

Although this may look a bit awkward, it allows for some further
simplification.
Cc: Catalin Marinas
Cc: Jade Alglave
Cc: Shameer Kolothum
Signed-off-by: Marc Zyngier
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/kvm_mmu.h | 5 +++--
 arch/arm64/kvm/hyp/nvhe/switch.c | 4 +++-
 arch/arm64/kvm/hyp/nvhe/tlb.c    | 2 +-
 arch/arm64/kvm/hyp/vhe/switch.c  | 2 +-
 arch/arm64/kvm/hyp/vhe/tlb.c     | 2 +-
 5 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index b52c5c4b9a3d..05e089653a1a 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -280,9 +280,10 @@ static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long
 	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
 
-static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
+static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
+						struct kvm_arch *arch)
 {
-	__load_stage2(mmu, kern_hyp_va(mmu->arch)->vtcr);
+	__load_stage2(mmu, arch->vtcr);
 }
 
 static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index f7af9688c1f7..e50a49082923 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -170,6 +170,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
+	struct kvm_s2_mmu *mmu;
 	bool pmu_switch_needed;
 	u64 exit_code;
 
@@ -213,7 +214,8 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__sysreg32_restore_state(vcpu);
 	__sysreg_restore_state_nvhe(guest_ctxt);
 
-	__load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu));
+	mmu = kern_hyp_va(vcpu->arch.hw_mmu);
+	__load_guest_stage2(mmu, kern_hyp_va(mmu->arch));
 	__activate_traps(vcpu);
 
 	__hyp_vgic_restore_state(vcpu);
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index 38ed0f6f2703..76229407d8f0 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -39,7 +39,7 @@ static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu,
 	 * ensuring that we always have an ISB, but not two ISBs back
 	 * to back.
 	 */
-	__load_guest_stage2(mmu);
+	__load_guest_stage2(mmu, kern_hyp_va(mmu->arch));
 	asm(ALTERNATIVE("isb", "nop", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
 
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index b3229924d243..0cb7523a501a 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -128,7 +128,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	 * __load_guest_stage2 configures stage 2 translation, and
 	 * __activate_traps clear HCR_EL2.TGE (among other things).
 	 */
-	__load_guest_stage2(vcpu->arch.hw_mmu);
+	__load_guest_stage2(vcpu->arch.hw_mmu, vcpu->arch.hw_mmu->arch);
 	__activate_traps(vcpu);
 
 	__kvm_adjust_pc(vcpu);
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index 66f17349f0c3..5e9fb3989e0b 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -53,7 +53,7 @@ static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu,
 	 * place before clearing TGE. __load_guest_stage2() already
 	 * has an ISB in order to deal with this.
 	 */
-	__load_guest_stage2(mmu);
+	__load_guest_stage2(mmu, mmu->arch);
 	val = read_sysreg(hcr_el2);
 	val &= ~HCR_TGE;
 	write_sysreg(val, hcr_el2);
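
As a quick summary of the call-site change (a sketch lifted from the
__kvm_vcpu_run() hunk above, not a separate modification): the nVHE caller now
resolves the hyp VAs itself, while the VHE callers simply pass mmu->arch
through, since kernel pointers remain usable at EL2 with VHE.

	/* Before: the helper translated kvm->arch to a hyp VA internally */
	__load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu));

	/* After (nVHE): the caller resolves both pointers and passes them in */
	mmu = kern_hyp_va(vcpu->arch.hw_mmu);
	__load_guest_stage2(mmu, kern_hyp_va(mmu->arch));
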
From patchwork Fri Aug 6 11:31:07 2021
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12423297
From: Will Deacon
To: linux-arm-kernel@lists.infradead.org
Cc: kernel-team@android.com, Will Deacon, Catalin Marinas, Marc Zyngier,
 Jade Alglave, Shameer Kolothum, kvmarm@lists.cs.columbia.edu,
 linux-arch@vger.kernel.org
Subject: [PATCH 3/4] KVM: arm64: Convert the host S2 over to __load_guest_stage2()
Date: Fri, 6 Aug 2021 12:31:07 +0100
Message-Id: <20210806113109.2475-5-will@kernel.org>
In-Reply-To: <20210806113109.2475-1-will@kernel.org>
References: <20210806113109.2475-1-will@kernel.org>

From: Marc Zyngier

The protected mode relies on a separate helper to load the
S2 context. Move over to the __load_guest_stage2() helper
instead.

Cc: Catalin Marinas
Cc: Jade Alglave
Cc: Shameer Kolothum
Signed-off-by: Marc Zyngier
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/kvm_mmu.h              | 11 +++--------
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         |  2 +-
 3 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 05e089653a1a..934ef0deff9f 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -267,9 +267,10 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
  * Must be called from hyp code running at EL2 with an updated VTTBR
  * and interrupts disabled.
  */
-static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long vtcr)
+static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
+						struct kvm_arch *arch)
 {
-	write_sysreg(vtcr, vtcr_el2);
+	write_sysreg(arch->vtcr, vtcr_el2);
 	write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
 
 	/*
@@ -280,12 +281,6 @@ static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long
 	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
 
-static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
-						struct kvm_arch *arch)
-{
-	__load_stage2(mmu, arch->vtcr);
-}
-
 static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
 {
 	return container_of(mmu->arch, struct kvm, arch);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 9c227d87c36d..a910648bc71b 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -29,7 +29,7 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
 static __always_inline void __load_host_stage2(void)
 {
 	if (static_branch_likely(&kvm_protected_mode_initialized))
-		__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
+		__load_guest_stage2(&host_kvm.arch.mmu, &host_kvm.arch);
 	else
 		write_sysreg(0, vttbr_el2);
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index d938ce95d3bd..d4e74ca7f876 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -126,7 +126,7 @@ int __pkvm_prot_finalize(void)
 	kvm_flush_dcache_to_poc(params, sizeof(*params));
 
 	write_sysreg(params->hcr_el2, hcr_el2);
-	__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
+	__load_guest_stage2(&host_kvm.arch.mmu, &host_kvm.arch);
 
 	/*
	 * Make sure to have an ISB before the TLB maintenance below but only
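
After this change there is a single stage-2 load helper shared by guests and
the protected-mode host; a sketch of the resulting function, assembled from
the hunks above (the comment is condensed rather than quoted verbatim):

static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
						struct kvm_arch *arch)
{
	write_sysreg(arch->vtcr, vtcr_el2);
	write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);

	/* ARM64_WORKAROUND_SPECULATIVE_AT may require an ISB at this point */
	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
}

The protected-mode host now loads its stage-2 through the same call, e.g.
__load_guest_stage2(&host_kvm.arch.mmu, &host_kvm.arch), instead of the
removed __load_stage2().
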
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12423299 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.7 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E1FA0C4338F for ; Fri, 6 Aug 2021 11:34:30 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 97EDF61050 for ; Fri, 6 Aug 2021 11:34:30 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 97EDF61050 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=HfmhuTOxe5YFILEpF55SfZ56EZgm/G/mmfizngCFLf8=; b=pGX+6FLNIgxPPc qZfEoN4hNQ/+X9ODGvh3tkzbGwiWZRtODcCdDXOSvMSEpCQhDA0MdAOin67h6XV5BkYnXhvoSQ6Ch n+Mzef4yuiBmYyW8JlpuJW3B7okpFMVjpS75q8vXaUT0boGGkhKFx1WIRlD7w+SrS8tF2hsQokR74 XFEKaDao4zuYtiKuYWYuGqA+nRwRDF1aewuxNkze8KsO+Odn5QtSyH39uDu5djJFYfvmAzaJuy1Rz YojIJfcbTWXrSBBc/SljezlRvfzo4py0AgPPUK61YxrnxF518UVJOFYedG/vfGpNJCfRuoxCoEIAe 3IkusTMRFRiwhyUa5gdw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1mBy5T-00CI8Y-Dd; Fri, 06 Aug 2021 11:32:27 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1mBy4a-00CHpJ-6x for linux-arm-kernel@lists.infradead.org; Fri, 06 Aug 2021 11:31:33 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 5675761050; Fri, 6 Aug 2021 11:31:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628249491; bh=M9UB+bjgdI6GkTumogq6PmtB5wmIP3agM0PRxyG7pzs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=d99YBEzt7kgHJGXAqHCa0xdR9wp3Ia4JNvUYZgbwVAnr1+j3UI7DUz5N/PnFO6Dvg G/Lel3sJ23OXZk2Rqp91807gWTJ4dR+VLIEOnSvxWOGZeztENE1OjfljYjyA/G3ODJ H6ZBhMl8nIivSnxz9anaDw2QHJorMpPn+SZSFQID2kI0OWOUAblpuTjb86R1qCQuCn WHzUrXMvfiGIu9Z7hJMuCa5zZDQ9LcsPsrWomvQtBQvamPwpSj23f1IKKIyIH+H6PS aY3vnrawjp509cZqN4+n3o0fiBn8yWn0OZpnCN51tw7UuA/BV5ohyjzaJHtvTfbsNr KADY24c/a0nSw== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: kernel-team@android.com, Will Deacon , Catalin Marinas , Marc Zyngier , Jade Alglave , Shameer Kolothum , kvmarm@lists.cs.columbia.edu, linux-arch@vger.kernel.org Subject: [PATCH 4/4] KVM: arm64: Upgrade VMID accesses to {READ,WRITE}_ONCE Date: Fri, 6 Aug 2021 12:31:08 +0100 Message-Id: <20210806113109.2475-6-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: 
In-Reply-To: <20210806113109.2475-1-will@kernel.org>
References: <20210806113109.2475-1-will@kernel.org>

From: Marc Zyngier

Since TLB invalidation can run in parallel with VMID allocation,
we need to be careful and avoid any sort of load/store tearing.
Use {READ,WRITE}_ONCE consistently to avoid any surprise.

Cc: Catalin Marinas
Cc: Jade Alglave
Cc: Shameer Kolothum
Signed-off-by: Marc Zyngier
Signed-off-by: Will Deacon
Reviewed-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_mmu.h      | 7 ++++++-
 arch/arm64/kvm/arm.c                  | 2 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 4 ++--
 arch/arm64/kvm/mmu.c                  | 2 +-
 4 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 934ef0deff9f..5828dd8fa738 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -252,6 +252,11 @@ static inline int kvm_write_guest_lock(struct kvm *kvm, gpa_t gpa,
 
 #define kvm_phys_to_vttbr(addr)		phys_to_ttbr(addr)
 
+/*
+ * When this is (directly or indirectly) used on the TLB invalidation
+ * path, we rely on a previously issued DSB so that page table updates
+ * and VMID reads are correctly ordered.
+ */
 static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
 {
 	struct kvm_vmid *vmid = &mmu->vmid;
@@ -259,7 +264,7 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
 	u64 cnp = system_supports_cnp() ? VTTBR_CNP_BIT : 0;
 
 	baddr = mmu->pgd_phys;
-	vmid_field = (u64)vmid->vmid << VTTBR_VMID_SHIFT;
+	vmid_field = (u64)READ_ONCE(vmid->vmid) << VTTBR_VMID_SHIFT;
 	return kvm_phys_to_vttbr(baddr) | vmid_field | cnp;
 }
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e9a2b8f27792..658f76067f46 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -571,7 +571,7 @@ static void update_vmid(struct kvm_vmid *vmid)
 		kvm_call_hyp(__kvm_flush_vm_context);
 	}
 
-	vmid->vmid = kvm_next_vmid;
+	WRITE_ONCE(vmid->vmid, kvm_next_vmid);
 	kvm_next_vmid++;
 	kvm_next_vmid &= (1 << kvm_get_vmid_bits()) - 1;
 
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index d4e74ca7f876..55ae97a144b8 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -109,8 +109,8 @@ int kvm_host_prepare_stage2(void *pgt_pool_base)
 	mmu->pgd_phys = __hyp_pa(host_kvm.pgt.pgd);
 	mmu->arch = &host_kvm.arch;
 	mmu->pgt = &host_kvm.pgt;
-	mmu->vmid.vmid_gen = 0;
-	mmu->vmid.vmid = 0;
+	WRITE_ONCE(mmu->vmid.vmid_gen, 0);
+	WRITE_ONCE(mmu->vmid.vmid, 0);
 
 	return 0;
 }
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 3155c9e778f0..b1a6eaec28ff 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -485,7 +485,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
 	mmu->arch = &kvm->arch;
 	mmu->pgt = pgt;
 	mmu->pgd_phys = __pa(pgt->pgd);
-	mmu->vmid.vmid_gen = 0;
+	WRITE_ONCE(mmu->vmid.vmid_gen, 0);
 
 	return 0;
 
 out_destroy_pgtable:
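
In short, the series leaves the VMID with an explicit writer/reader pairing;
the lines below are a sketch lifted from the hunks above (variable names as in
the diff), showing the two sides of the race that the {READ,WRITE}_ONCE
annotations cover:

	/* Writer (update_vmid()): publish the new VMID without store tearing */
	WRITE_ONCE(vmid->vmid, kvm_next_vmid);

	/* Reader (kvm_get_vttbr(), possibly on the TLBI path): sample it once */
	vmid_field = (u64)READ_ONCE(vmid->vmid) << VTTBR_VMID_SHIFT;
	return kvm_phys_to_vttbr(baddr) | vmid_field | cnp;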