From patchwork Tue May 28 12:59:08 2024
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 13676666
Date: Tue, 28 May 2024 13:59:08 +0100
In-Reply-To: <20240528125914.277057-1-tabba@google.com>
References: <20240528125914.277057-1-tabba@google.com>
X-Mailer: git-send-email 2.45.1.288.g0e0cd299f1-goog
Message-ID: <20240528125914.277057-6-tabba@google.com>
Subject: [PATCH v3 05/11] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
From: Fuad Tabba <tabba@google.com>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, will@kernel.org, qperret@google.com, tabba@google.com,
 seanjc@google.com, alexandru.elisei@arm.com, catalin.marinas@arm.com,
 philmd@linaro.org, james.morse@arm.com, suzuki.poulose@arm.com,
 oliver.upton@linux.dev, mark.rutland@arm.com, broonie@kernel.org,
 joey.gouly@arm.com, rananta@google.com, yuzenghui@huawei.com

When running in protected mode, we don't want to leak protected guest state
to the host, including whether a guest has used fpsimd/sve. Therefore,
eagerly restore the host state on guest exit when running in protected
mode, which only happens if the guest has used fpsimd/sve.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 13 ++++-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c      | 67 +++++++++++++++++++++++--
 arch/arm64/kvm/hyp/nvhe/pkvm.c          |  2 +
 arch/arm64/kvm/hyp/nvhe/switch.c        | 16 +++++-
 4 files changed, 93 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index d3a3f1cee668..89c52b59d2a9 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -320,6 +320,17 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 	write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
 }
 
+static inline void __hyp_sve_save_host(void)
+{
+	struct cpu_sve_state *sve_state = *host_data_ptr(sve_state);
+
+	sve_state->zcr_el1 = read_sysreg_el1(SYS_ZCR);
+	write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+	isb();
+	__sve_save_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl),
+			 &sve_state->fpsr);
+}
+
 static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu);
 
 /*
@@ -354,7 +365,7 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
 	/* Valid trap. Switch the context: */
 
 	/* First disable enough traps to allow us to update the registers */
-	if (sve_guest)
+	if (sve_guest || (is_protected_kvm_enabled() && system_supports_sve()))
 		cpacr_clear_set(0, CPACR_ELx_FPEN|CPACR_ELx_ZEN);
 	else
 		cpacr_clear_set(0, CPACR_ELx_FPEN);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index f71394d0e32a..1088b0bd3cc5 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -23,20 +23,80 @@ DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
 
 void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);
 
+static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu)
+{
+	__vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
+	/*
+	 * On saving/restoring guest sve state, always use the maximum VL for
+	 * the guest. The layout of the data when saving the sve state depends
+	 * on the VL, so use a consistent (i.e., the maximum) guest VL.
+	 */
+	sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
+	isb();
+	__sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr);
+	write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+}
+
+static void __hyp_sve_restore_host(void)
+{
+	struct cpu_sve_state *sve_state = *host_data_ptr(sve_state);
+
+	/*
+	 * On saving/restoring host sve state, always use the maximum VL for
+	 * the host. The layout of the data when saving the sve state depends
+	 * on the VL, so use a consistent (i.e., the maximum) host VL.
+	 *
+	 * Setting ZCR_EL2 to ZCR_ELx_LEN_MASK sets the effective length
+	 * supported by the system (or limited at EL3).
+	 */
+	write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+	isb();
+	__sve_restore_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl),
+			    &sve_state->fpsr);
+	write_sysreg_el1(sve_state->zcr_el1, SYS_ZCR);
+}
+
+static void fpsimd_sve_flush(void)
+{
+	*host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
+}
+
+static void fpsimd_sve_sync(struct kvm_vcpu *vcpu)
+{
+	if (!guest_owns_fp_regs())
+		return;
+
+	cpacr_clear_set(0, CPACR_ELx_FPEN|CPACR_ELx_ZEN);
+	isb();
+
+	if (vcpu_has_sve(vcpu))
+		__hyp_sve_save_guest(vcpu);
+	else
+		__fpsimd_save_state(&vcpu->arch.ctxt.fp_regs);
+
+	if (system_supports_sve())
+		__hyp_sve_restore_host();
+	else
+		__fpsimd_restore_state(*host_data_ptr(fpsimd_state));
+
+	*host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
+}
+
 static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 {
 	struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
 
+	fpsimd_sve_flush();
+
 	hyp_vcpu->vcpu.arch.ctxt	= host_vcpu->arch.ctxt;
 
 	hyp_vcpu->vcpu.arch.sve_state	= kern_hyp_va(host_vcpu->arch.sve_state);
-	hyp_vcpu->vcpu.arch.sve_max_vl	= host_vcpu->arch.sve_max_vl;
+	hyp_vcpu->vcpu.arch.sve_max_vl	= min(host_vcpu->arch.sve_max_vl, kvm_host_sve_max_vl);
 
 	hyp_vcpu->vcpu.arch.hw_mmu	= host_vcpu->arch.hw_mmu;
 
 	hyp_vcpu->vcpu.arch.hcr_el2	= host_vcpu->arch.hcr_el2;
 	hyp_vcpu->vcpu.arch.mdcr_el2	= host_vcpu->arch.mdcr_el2;
-	hyp_vcpu->vcpu.arch.cptr_el2	= host_vcpu->arch.cptr_el2;
 
 	hyp_vcpu->vcpu.arch.iflags	= host_vcpu->arch.iflags;
 
@@ -54,10 +114,11 @@ static void sync_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	struct vgic_v3_cpu_if *host_cpu_if = &host_vcpu->arch.vgic_cpu.vgic_v3;
 	unsigned int i;
 
+	fpsimd_sve_sync(&hyp_vcpu->vcpu);
+
 	host_vcpu->arch.ctxt		= hyp_vcpu->vcpu.arch.ctxt;
 
 	host_vcpu->arch.hcr_el2		= hyp_vcpu->vcpu.arch.hcr_el2;
-	host_vcpu->arch.cptr_el2	= hyp_vcpu->vcpu.arch.cptr_el2;
 
 	host_vcpu->arch.fault		= hyp_vcpu->vcpu.arch.fault;
 
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 25e9a94f6d76..feb27b4ce459 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -588,6 +588,8 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
 	if (ret)
 		unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu));
 
+	hyp_vcpu->vcpu.arch.cptr_el2 = kvm_get_reset_cptr_el2(&hyp_vcpu->vcpu);
+
 	return ret;
 }
 
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 019f863922fa..bef74de7065b 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -184,7 +184,21 @@ static bool kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu, u64 *exit_code)
 
 static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
 {
-	__fpsimd_save_state(*host_data_ptr(fpsimd_state));
+	/*
+	 * Non-protected kvm relies on the host restoring its sve state.
+	 * Protected kvm restores the host's sve state so as not to reveal
+	 * that fpsimd was used by a guest nor leak upper sve bits.
+	 */
+	if (unlikely(is_protected_kvm_enabled() && system_supports_sve())) {
+		__hyp_sve_save_host();
+
+		/* Re-enable SVE traps if not supported for the guest vcpu. */
+		if (!vcpu_has_sve(vcpu))
+			cpacr_clear_set(CPACR_ELx_ZEN, 0);
+
+	} else {
+		__fpsimd_save_state(*host_data_ptr(fpsimd_state));
+	}
 }
 
 static const exit_handler_fn hyp_exit_handlers[] = {
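
As an aid to review, the ownership flow the patch implements can be modelled
in isolation. The sketch below is illustrative only: plain userspace C, with
all names (cpu_model, handle_fp_trap, guest_exit) invented here rather than
taken from the kernel. It shows the asymmetry kvm_hyp_save_fpsimd_host()
encodes: a guest fpsimd/sve access traps and hands the registers to the
guest, and in protected mode the host state is eagerly restored on guest
exit, so the host never learns whether the guest touched fpsimd/sve.

#include <stdbool.h>
#include <stdio.h>

enum fp_owner { FP_STATE_HOST_OWNED, FP_STATE_GUEST_OWNED };

struct cpu_model {
	enum fp_owner fp_owner;
	bool protected_mode;
	bool guest_has_sve;
};

/* Guest FP/SVE access trapped: save host state, give the guest the regs. */
static void handle_fp_trap(struct cpu_model *cpu)
{
	printf("trap: save host %s state, enable FP for guest\n",
	       cpu->guest_has_sve ? "sve" : "fpsimd");
	cpu->fp_owner = FP_STATE_GUEST_OWNED;
}

/* Guest exit: only protected mode restores the host state eagerly. */
static void guest_exit(struct cpu_model *cpu)
{
	if (cpu->protected_mode && cpu->fp_owner == FP_STATE_GUEST_OWNED) {
		printf("exit: save guest state, restore host state\n");
		cpu->fp_owner = FP_STATE_HOST_OWNED;
	}
	/* Non-protected mode defers: the host restores its own state later. */
}

int main(void)
{
	struct cpu_model cpu = { FP_STATE_HOST_OWNED, true, true };

	handle_fp_trap(&cpu);	/* guest uses fpsimd/sve */
	guest_exit(&cpu);	/* host state is back before the host runs */
	printf("owner on return to host: %s\n",
	       cpu.fp_owner == FP_STATE_HOST_OWNED ? "host" : "guest");
	return 0;
}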
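
The comments in __hyp_sve_save_guest() and __hyp_sve_restore_host() note
that the SVE save-area layout depends on the vector length (VL) in effect
when state is saved. A second illustrative sketch makes that concrete; the
constants follow the architectural sizes (a Z register is vq * 16 bytes, a
predicate register vq * 2 bytes, where vq counts 128-bit quadwords), but
the helper names are invented here and are not kernel API:

#include <stdio.h>

#define SVE_NUM_ZREGS	32	/* Z0-Z31, each vq * 16 bytes */
#define SVE_NUM_PREGS	16	/* P0-P15, each vq * 2 bytes */

/* Bytes of Z-register state for a vector length of vq quadwords. */
static unsigned int zregs_bytes(unsigned int vq)
{
	return SVE_NUM_ZREGS * vq * 16;
}

/* FFR is predicate-sized and stored after the Z and P registers. */
static unsigned int ffr_offset(unsigned int vq)
{
	return zregs_bytes(vq) + SVE_NUM_PREGS * vq * 2;
}

int main(void)
{
	/*
	 * Saving at vq == 2 but restoring at vq == 4 would look for the
	 * FFR (and every P register) at the wrong offset, hence the
	 * fixed maximum VL in the patch.
	 */
	printf("FFR offset at vq=2: %u bytes\n", ffr_offset(2));
	printf("FFR offset at vq=4: %u bytes\n", ffr_offset(4));
	return 0;
}

This is why both host paths pin ZCR_EL2 to ZCR_ELx_LEN_MASK (the maximum
supported length) before calling __sve_save_state()/__sve_restore_state(),
and the guest paths consistently use the guest's maximum VL.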