From patchwork Thu Oct 12 10:41:31 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10001563
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: kvm@vger.kernel.org, Marc Zyngier, Shih-Wei Li, Christoffer Dall
Subject: [PATCH 27/37] KVM: arm64: Defer saving/restoring system registers to vcpu load/put on VHE
Date: Thu, 12 Oct 2017 12:41:31 +0200
Message-Id: <20171012104141.26902-28-christoffer.dall@linaro.org>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20171012104141.26902-1-christoffer.dall@linaro.org>
References: <20171012104141.26902-1-christoffer.dall@linaro.org>

Some system registers do not affect the host kernel's execution and can
therefore be loaded when we are about to run a VCPU, and we do not have
to restore the corresponding host state to the hardware until we are
actually about to return to userspace or schedule out the VCPU thread.

The EL1 system registers and the userspace state registers, which only
affect EL0 execution, do not affect the host kernel's execution.  The
32-bit system registers are not used by a VHE host kernel at all.
Neither class therefore needs to be saved/restored on every entry/exit
to/from the guest; instead, the work can be deferred to vcpu_load and
vcpu_put, respectively.

We have already prepared the trap handling code that accesses any of
these registers to either access the register directly on the physical
CPU or to sync the register back to memory when needed.

Signed-off-by: Christoffer Dall
---
 arch/arm64/kvm/hyp/switch.c    |  6 ------
 arch/arm64/kvm/hyp/sysreg-sr.c | 46 ++++++++++++++++++++++++++++++++++--------
 2 files changed, 38 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 6356bec..6a12504 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -337,11 +337,6 @@ int kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	__vgic_restore_state(vcpu);
 
-	/*
-	 * We must restore the 32-bit state before the sysregs, thanks
-	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
-	 */
-	__sysreg32_restore_state(vcpu);
 	sysreg_restore_guest_state_vhe(guest_ctxt);
 	__debug_switch_to_guest(vcpu);
 
@@ -354,7 +349,6 @@ int kvm_vcpu_run(struct kvm_vcpu *vcpu)
 		goto again;
 
 	sysreg_save_guest_state_vhe(guest_ctxt);
-	__sysreg32_save_state(vcpu);
 	__vgic_save_state(vcpu);
 
 	__deactivate_traps(vcpu);
diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index 354ca02..6ff1ce5 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -25,8 +25,12 @@
 /*
  * Non-VHE: Both host and guest must save everything.
  *
- * VHE: Host must save tpidr*_el0, actlr_el1, mdscr_el1, sp_el0,
- * and guest must save everything.
+ * VHE: Host and guest must save mdscr_el1 and sp_el0 (and the PC and pstate,
+ * which are handled as part of the el2 return state) on every switch.
+ * tpidr_el0, tpidrro_el0, and actlr_el1 only need to be switched when going
+ * to host userspace or a different VCPU.  EL1 registers only need to be
+ * switched when potentially going to run a different VCPU.  The latter two
+ * classes are handled as part of kvm_arch_vcpu_load and kvm_arch_vcpu_put.
  */
 
 static void __hyp_text __sysreg_save_common_state(struct kvm_cpu_context *ctxt)
@@ -85,14 +89,11 @@ void __hyp_text __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt)
 void sysreg_save_host_state_vhe(struct kvm_cpu_context *ctxt)
 {
 	__sysreg_save_common_state(ctxt);
-	__sysreg_save_user_state(ctxt);
 }
 
 void sysreg_save_guest_state_vhe(struct kvm_cpu_context *ctxt)
 {
-	__sysreg_save_el1_state(ctxt);
 	__sysreg_save_common_state(ctxt);
-	__sysreg_save_user_state(ctxt);
 	__sysreg_save_el2_return_state(ctxt);
 }
 
@@ -153,14 +154,11 @@ void __hyp_text __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt)
 void sysreg_restore_host_state_vhe(struct kvm_cpu_context *ctxt)
 {
 	__sysreg_restore_common_state(ctxt);
-	__sysreg_restore_user_state(ctxt);
 }
 
 void sysreg_restore_guest_state_vhe(struct kvm_cpu_context *ctxt)
 {
-	__sysreg_restore_el1_state(ctxt);
 	__sysreg_restore_common_state(ctxt);
-	__sysreg_restore_user_state(ctxt);
 	__sysreg_restore_el2_return_state(ctxt);
 }
 
@@ -224,6 +222,26 @@ void __hyp_text __sysreg32_restore_state(struct kvm_vcpu *vcpu)
  */
 void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *host_ctxt = vcpu->arch.host_cpu_context;
+	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
+
+	if (!has_vhe())
+		return;
+
+	__sysreg_save_user_state(host_ctxt);
+
+
+	/*
+	 * Load guest EL1 and user state
+	 *
+	 * We must restore the 32-bit state before the sysregs, thanks
+	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
+	 */
+	__sysreg32_restore_state(vcpu);
+	__sysreg_restore_user_state(guest_ctxt);
+	__sysreg_restore_el1_state(guest_ctxt);
+
+	vcpu->arch.sysregs_loaded_on_cpu = true;
 }
 
 /**
@@ -250,4 +268,16 @@ void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu)
 		__fpsimd_restore_state(&host_ctxt->gp_regs.fp_regs);
 		vcpu->arch.guest_vfp_loaded = 0;
 	}
+
+	if (!has_vhe())
+		return;
+
+	__sysreg_save_el1_state(guest_ctxt);
+	__sysreg_save_user_state(guest_ctxt);
+	__sysreg32_save_state(vcpu);
+
+	/* Restore host user state */
+	__sysreg_restore_user_state(host_ctxt);
+
+	vcpu->arch.sysregs_loaded_on_cpu = false;
 }
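
For reference, the access pattern that the earlier trap-handling patches in
this series set up looks roughly like the sketch below.  This is only an
illustration, not code from this patch: the helper name and the choice of
register are made up here; only sysregs_loaded_on_cpu (added above), the
ctxt.sys_regs array, and the read_sysreg_el1() accessor are taken from the
kernel sources.

/*
 * Illustrative sketch: once the guest's EL1 state has been loaded on the
 * CPU (VHE, between vcpu_load and vcpu_put), a deferred-aware accessor
 * reads the hardware register; otherwise it reads the in-memory copy.
 */
static u64 example_read_guest_sysreg(struct kvm_vcpu *vcpu, int reg)
{
	if (vcpu->arch.sysregs_loaded_on_cpu && reg == SCTLR_EL1)
		return read_sysreg_el1(sctlr);	/* live on the CPU */

	return vcpu->arch.ctxt.sys_regs[reg];	/* in-memory context */
}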