From patchwork Mon Jan  6 18:32:13 2025
X-Patchwork-Submitter: Vincent Donnefort
X-Patchwork-Id: 13927750
Date: Mon, 6 Jan 2025 18:32:13 +0000
X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog
Message-ID: <20250106183213.4094616-1-vdonnefort@google.com>
Subject: [PATCH] KVM: arm64: Fix nVHE stacktrace VA bits mask
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev
Cc: kvmarm@lists.linux.dev, kernel-team@android.com,
    linux-arm-kernel@lists.infradead.org, Vincent Donnefort

The hypervisor VA space size depends on both the ID map's VA bits
(IDMAP_VA_BITS) and the kernel stage-1's (VA_BITS). When VA_BITS is
smaller than IDMAP_VA_BITS (e.g. a 39-bit VA_BITS), the stacktrace can
contain addresses bigger than the current VA_BITS mask.

As hyp_va_bits now needs to be used outside of the init code, turn it
into a global variable, shared by all the KVM users in mmu.c, arm.c and
now stacktrace.c.

Signed-off-by: Vincent Donnefort

base-commit: 13563da6ffcf49b8b45772e40b35f96926a7ee1e

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 66d93e320ec8..8195a77056a9 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -139,6 +139,8 @@ static __always_inline unsigned long __kern_hyp_va(unsigned long v)
 
 #define kern_hyp_va(v)	((typeof(v))(__kern_hyp_va((unsigned long)(v))))
 
+extern u32 hyp_va_bits;
+
 /*
  * We currently support using a VM-specified IPA size. For backward
  * compatibility, the default IPA size is fixed to 40bits.
@@ -182,7 +184,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu);
 
 phys_addr_t kvm_mmu_get_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
-int __init kvm_mmu_init(u32 *hyp_va_bits);
+int __init kvm_mmu_init(void);
 
 static inline void *__kvm_vector_slot2addr(void *base,
 					   enum arm64_hyp_spectre_vector slot)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a102c3aebdbc..90d28c35c5b5 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1987,7 +1987,7 @@ static int kvm_init_vector_slots(void)
 	return 0;
 }
 
-static void __init cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
+static void __init cpu_prepare_hyp_mode(int cpu)
 {
 	struct kvm_nvhe_init_params *params = per_cpu_ptr_nvhe_sym(kvm_init_params, cpu);
 	u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
@@ -2351,7 +2351,7 @@ static void __init teardown_hyp_mode(void)
 	}
 }
 
-static int __init do_pkvm_init(u32 hyp_va_bits)
+static int __init do_pkvm_init(void)
 {
 	void *per_cpu_base = kvm_ksym_ref(kvm_nvhe_sym(kvm_arm_hyp_percpu_base));
 	int ret;
@@ -2412,7 +2412,7 @@ static void kvm_hyp_init_symbols(void)
 	kvm_nvhe_sym(kvm_arm_vmid_bits) = kvm_arm_vmid_bits;
 }
 
-static int __init kvm_hyp_init_protection(u32 hyp_va_bits)
+static int __init kvm_hyp_init_protection(void)
 {
 	void *addr = phys_to_virt(hyp_mem_base);
 	int ret;
@@ -2421,7 +2421,7 @@ static int __init kvm_hyp_init_protection(u32 hyp_va_bits)
 	if (ret)
 		return ret;
 
-	ret = do_pkvm_init(hyp_va_bits);
+	ret = do_pkvm_init();
 	if (ret)
 		return ret;
 
@@ -2505,7 +2505,6 @@ static void pkvm_hyp_init_ptrauth(void)
 /* Inits Hyp-mode on all online CPUs */
 static int __init init_hyp_mode(void)
 {
-	u32 hyp_va_bits;
 	int cpu;
 	int err = -ENOMEM;
 
@@ -2519,7 +2518,7 @@ static int __init init_hyp_mode(void)
 	/*
 	 * Allocate Hyp PGD and setup Hyp identity mapping
 	 */
-	err = kvm_mmu_init(&hyp_va_bits);
+	err = kvm_mmu_init();
 	if (err)
 		goto out_err;
 
@@ -2633,7 +2632,7 @@ static int __init init_hyp_mode(void)
 	}
 
 		/* Prepare the CPU initialization parameters */
-		cpu_prepare_hyp_mode(cpu, hyp_va_bits);
+		cpu_prepare_hyp_mode(cpu);
 	}
 
 	kvm_hyp_init_symbols();
@@ -2654,7 +2653,7 @@ static int __init init_hyp_mode(void)
 		if (err)
 			goto out_err;
 
-		err = kvm_hyp_init_protection(hyp_va_bits);
+		err = kvm_hyp_init_protection();
 		if (err) {
 			kvm_err("Failed to init hyp memory protection\n");
 			goto out_err;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c9d46ad57e52..62a99c86cd1d 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -29,6 +29,8 @@ static unsigned long __ro_after_init hyp_idmap_start;
 static unsigned long __ro_after_init hyp_idmap_end;
 static phys_addr_t __ro_after_init hyp_idmap_vector;
 
+u32 __ro_after_init hyp_va_bits;
+
 static unsigned long __ro_after_init io_map_base;
 
 static phys_addr_t __stage2_range_addr_end(phys_addr_t addr, phys_addr_t end,
@@ -1986,7 +1988,7 @@ static struct kvm_pgtable_mm_ops kvm_hyp_mm_ops = {
 	.virt_to_phys		= kvm_host_pa,
 };
 
-int __init kvm_mmu_init(u32 *hyp_va_bits)
+int __init kvm_mmu_init(void)
 {
 	int err;
 	u32 idmap_bits;
@@ -2020,9 +2022,9 @@ int __init kvm_mmu_init(u32 *hyp_va_bits)
 	 */
 	idmap_bits = IDMAP_VA_BITS;
 	kernel_bits = vabits_actual;
-	*hyp_va_bits = max(idmap_bits, kernel_bits);
+	hyp_va_bits = max(idmap_bits, kernel_bits);
 
-	kvm_debug("Using %u-bit virtual addresses at EL2\n", *hyp_va_bits);
+	kvm_debug("Using %u-bit virtual addresses at EL2\n", hyp_va_bits);
 	kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
 	kvm_debug("HYP VA range: %lx:%lx\n",
 		  kern_hyp_va(PAGE_OFFSET),
@@ -2047,7 +2049,7 @@ int __init kvm_mmu_init(u32 *hyp_va_bits)
 		goto out;
 	}
 
-	err = kvm_pgtable_hyp_init(hyp_pgtable, *hyp_va_bits, &kvm_hyp_mm_ops);
+	err = kvm_pgtable_hyp_init(hyp_pgtable, hyp_va_bits, &kvm_hyp_mm_ops);
 	if (err)
 		goto out_free_pgtable;
diff --git a/arch/arm64/kvm/stacktrace.c b/arch/arm64/kvm/stacktrace.c
index 3ace5b75813b..ef7a22598d89 100644
--- a/arch/arm64/kvm/stacktrace.c
+++ b/arch/arm64/kvm/stacktrace.c
@@ -19,6 +19,7 @@
 
 #include 
 #include 
+#include 
 #include 
 
 static struct stack_info stackinfo_get_overflow(void)
@@ -145,7 +146,7 @@ static void unwind(struct unwind_state *state,
  */
 static bool kvm_nvhe_dump_backtrace_entry(void *arg, unsigned long where)
 {
-	unsigned long va_mask = GENMASK_ULL(vabits_actual - 1, 0);
+	unsigned long va_mask = GENMASK_ULL(hyp_va_bits - 1, 0);
 	unsigned long hyp_offset = (unsigned long)arg;
 
 	/* Mask tags and convert to kern addr */