From patchwork Fri May 10 11:26:33 2024
X-Patchwork-Submitter: Pierre-Clément Tosi
X-Patchwork-Id: 13661433
Date: Fri, 10 May 2024 12:26:33 +0100
In-Reply-To: <20240510112645.3625702-1-ptosi@google.com>
References: <20240510112645.3625702-1-ptosi@google.com>
X-Mailer: git-send-email 2.45.0.118.g7fe29c98d7-goog
Message-ID: <20240510112645.3625702-5-ptosi@google.com>
Subject: [PATCH v3 04/12] KVM: arm64: nVHE: Remove __guest_exit_panic path
From: Pierre-Clément Tosi
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Cc: Pierre-Clément Tosi, Marc Zyngier, Oliver Upton, Suzuki K Poulose, Vincent Donnefort

In invalid_host_el2_vect (i.e. EL2{t,h} handlers in nVHE guest context),
remove the duplicate vCPU context check that __guest_exit_panic also
performs, allowing an unconditional branch to it.

Rename __guest_exit_panic to __hyp_panic to better reflect that it might
not exit through the guest but will always (directly or indirectly) end
up executing hyp_panic().

Fix its wrong (probably bitrotten) ABI doc to reflect the ABI expected by
VHE and (now) nVHE.

Use CPU_LR_OFFSET to clarify that the routine returns to hyp_panic().

Restore x0, x1 before calling hyp_panic when __hyp_panic is executed in
host context (i.e. called from __kvm_hyp_vector).

Signed-off-by: Pierre-Clément Tosi
---
 arch/arm64/kvm/hyp/entry.S              | 14 +++++++++-----
 arch/arm64/kvm/hyp/hyp-entry.S          |  2 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h |  4 ++--
 arch/arm64/kvm/hyp/nvhe/host.S          |  8 +-------
 4 files changed, 13 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index bcaaf1a11b4e..6a1ce9d21e5b 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -83,7 +83,7 @@ alternative_else_nop_endif
 	eret
 	sb
 
-SYM_INNER_LABEL(__guest_exit_restore_elr_and_panic, SYM_L_GLOBAL)
+SYM_INNER_LABEL(__hyp_restore_elr_and_panic, SYM_L_GLOBAL)
 	// x0-x29,lr: hyp regs
 
 	stp	x0, x1, [sp, #-16]!
@@ -92,13 +92,15 @@ SYM_INNER_LABEL(__guest_exit_restore_elr_and_panic, SYM_L_GLOBAL)
 	msr	elr_el2, x0
 	ldp	x0, x1, [sp], #16
 
-SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
-	// x2-x29,lr: vcpu regs
-	// vcpu x0-x1 on the stack
+SYM_INNER_LABEL(__hyp_panic, SYM_L_GLOBAL)
+	// x0-x29,lr: vcpu regs
+
+	stp	x0, x1, [sp, #-16]!
 
 	// If the hyp context is loaded, go straight to hyp_panic
 	get_loaded_vcpu x0, x1
 	cbnz	x0, 1f
+	ldp	x0, x1, [sp], #16
 	b	hyp_panic
 
 1:
@@ -110,10 +112,12 @@ SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
 	// accurate if the guest had been completely restored.
 	adr_this_cpu x0, kvm_hyp_ctxt, x1
 	adr_l	x1, hyp_panic
-	str	x1, [x0, #CPU_XREG_OFFSET(30)]
+	str	x1, [x0, #CPU_LR_OFFSET]
 
 	get_vcpu_ptr	x1, x0
 
+	// Keep x0-x1 on the stack for __guest_exit
+
 SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
 	// x0: return code
 	// x1: vcpu
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 03f97d71984c..7e65ef738ec9 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -122,7 +122,7 @@ el2_error:
 	eret
 	sb
 
-.macro invalid_vector	label, target = __guest_exit_panic
+.macro invalid_vector	label, target = __hyp_panic
 	.align	2
 SYM_CODE_START_LOCAL(\label)
 	b \target
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 19a7ca2c1277..9387e3a0b680 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -753,7 +753,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 
 static inline void __kvm_unexpected_el2_exception(void)
 {
-	extern char __guest_exit_restore_elr_and_panic[];
+	extern char __hyp_restore_elr_and_panic[];
 	unsigned long addr, fixup;
 	struct kvm_exception_table_entry *entry, *end;
 	unsigned long elr_el2 = read_sysreg(elr_el2);
@@ -776,7 +776,7 @@ static inline void __kvm_unexpected_el2_exception(void)
 
 	/* Trigger a panic after restoring the hyp context. */
 	this_cpu_ptr(&kvm_hyp_ctxt)->sys_regs[ELR_EL2] = elr_el2;
-	write_sysreg(__guest_exit_restore_elr_and_panic, elr_el2);
+	write_sysreg(__hyp_restore_elr_and_panic, elr_el2);
 }
 
 #endif /* __ARM64_KVM_HYP_SWITCH_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 135cfb294ee5..7397b4f1838a 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -196,19 +196,13 @@ SYM_FUNC_END(__host_hvc)
 	tbz	x0, #PAGE_SHIFT, .L__hyp_sp_overflow\@
 	sub	x0, sp, x0	// x0'' = sp' - x0' = (sp + x0) - sp = x0
 	sub	sp, sp, x0	// sp'' = sp' - x0 = (sp + x0) - x0 = sp
 
-	/* If a guest is loaded, panic out of it. */
-	stp	x0, x1, [sp, #-16]!
-	get_loaded_vcpu x0, x1
-	cbnz	x0, __guest_exit_panic
-	add	sp, sp, #16
-
 	/*
 	 * The panic may not be clean if the exception is taken before the host
 	 * context has been saved by __host_exit or after the hyp context has
 	 * been partially clobbered by __host_enter.
 	 */
-	b	hyp_panic
+	b	__hyp_panic
 
 .L__hyp_sp_overflow\@:
 	/* Switch to the overflow stack */
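
For readers piecing the new control flow together from the hunks above, a
minimal C sketch of the branch that the renamed __hyp_panic entry point now
takes. It is illustrative only (not part of the patch), and every function
and variable name in it is a hypothetical stand-in for the assembly macros
and labels in entry.S (get_loaded_vcpu, the guest-exit path, hyp_panic).

/*
 * Illustrative model only: panic directly when the hyp/host context is
 * loaded (no vCPU), otherwise save the guest context first and leave
 * through the guest-exit path so that restoring the hyp context "returns"
 * into hyp_panic() via the saved LR (CPU_LR_OFFSET).
 */
#include <stdio.h>
#include <stddef.h>

/* Stand-in for get_loaded_vcpu: NULL means the hyp/host context is loaded. */
static void *loaded_vcpu;

static void hyp_panic(void)
{
	puts("hyp_panic()");
}

/* Stand-in for the guest-exit path: save the vCPU registers and point the
 * saved hyp LR at hyp_panic() before restoring the hyp context. */
static void exit_guest_then_panic(void)
{
	puts("save vCPU x0-x29/lr, set hyp context LR = hyp_panic");
	hyp_panic();
}

/* Model of __hyp_panic: x0/x1 are stacked on entry and popped again before
 * branching to hyp_panic() in the host-context case. */
static void hyp_panic_entry_model(void)
{
	if (loaded_vcpu == NULL) {
		/* Host/hyp context loaded: restore x0, x1, panic directly. */
		hyp_panic();
		return;
	}

	/* Guest context loaded: exit the guest, then panic. */
	exit_guest_then_panic();
}

int main(void)
{
	loaded_vcpu = NULL;		/* e.g. invalid exception taken from the host */
	hyp_panic_entry_model();

	loaded_vcpu = (void *)1;	/* e.g. unexpected exception in guest context */
	hyp_panic_entry_model();
	return 0;
}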