From patchwork Tue Feb 27 11:34:27 2018
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10244875
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: kvm@vger.kernel.org, Marc Zyngier, Andrew Jones, Shih-Wei Li,
    Dave Martin, Julien Grall, Tomasz Nowicki, Yury Norov
Subject: [PATCH v5 38/40] KVM: arm/arm64: Handle VGICv3 save/restore from the main VGIC code on VHE
Date: Tue, 27 Feb 2018 12:34:27 +0100
Message-Id: <20180227113429.637-39-cdall@kernel.org>
X-Mailer: git-send-email 2.14.2
In-Reply-To: <20180227113429.637-1-cdall@kernel.org>
References: <20180227113429.637-1-cdall@kernel.org>

From: Christoffer Dall

Just like we can program the GICv2 hypervisor control interface directly
from the core vgic code, we can do the same for the GICv3 hypervisor
control interface on VHE systems.

We do this by simply calling the save/restore functions when we have VHE
and we can then get rid of the save/restore function calls from the VHE
world switch function.

One caveat is that we now write GICv3 system register state before the
potential early exit path in the run loop, and because we sync back state
in the early exit path, we have to ensure that we read a consistent GIC
state from the sync path, even though we have never actually run the
guest with the newly written GIC state. We solve this by inserting an ISB
in the early exit path.

Signed-off-by: Christoffer Dall
---

Notes:
    Changes since v4:
     - Added can_access_vgic_from_kernel() primitive to make the save/restore
       flow from the main vgic code slightly easier to understand.
     - Also added a __hyp prefix to the non-VHE world-switch save/restore
       functions for GICv3 to avoid confusion with the save/restore functions
       in the main VGIC code.

    Changes since v2:
     - Added ISB in the early exit path in the run loop as explained in the
       commit message.
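For illustration only (not part of the patch), the VHE run-loop ordering that
the ISB protects looks roughly like the sketch below; run_loop_sketch() and
early_exit_condition() are hypothetical stand-ins for the corresponding logic
in kvm_arch_vcpu_ioctl_run():

/*
 * Sketch only, assuming the usual KVM/arm run-loop context. On VHE,
 * kvm_vgic_flush_hwstate() now writes the GICv3 EL2 system registers
 * directly from the kernel, before the early-exit checks. If we take
 * the early exit, kvm_vgic_sync_hwstate() reads that state straight
 * back, so the writes must be architecturally complete first; hence
 * the isb().
 */
static void run_loop_sketch(struct kvm_vcpu *vcpu)	/* hypothetical */
{
	kvm_vgic_flush_hwstate(vcpu);	/* may write GICv3 sysregs on VHE */

	if (early_exit_condition(vcpu)) {	/* hypothetical predicate */
		vcpu->mode = OUTSIDE_GUEST_MODE;
		isb();	/* make the sysreg writes visible ... */
		kvm_vgic_sync_hwstate(vcpu);	/* ... before reading them back */
		return;
	}

	/* otherwise enter the guest as kvm_arch_vcpu_ioctl_run() normally does */
}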
 arch/arm64/kvm/hyp/switch.c | 13 ++++++-------
 virt/kvm/arm/arm.c          |  1 +
 virt/kvm/arm/vgic/vgic.c    | 21 +++++++++++++++++++--
 3 files changed, 26 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 31badf6e91e8..86abbee40d3f 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -192,13 +192,15 @@ static void __hyp_text __deactivate_vm(struct kvm_vcpu *vcpu)
 	write_sysreg(0, vttbr_el2);
 }
 
-static void __hyp_text __vgic_save_state(struct kvm_vcpu *vcpu)
+/* Save VGICv3 state on non-VHE systems */
+static void __hyp_text __hyp_vgic_save_state(struct kvm_vcpu *vcpu)
 {
 	if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
 		__vgic_v3_save_state(vcpu);
 }
 
-static void __hyp_text __vgic_restore_state(struct kvm_vcpu *vcpu)
+/* Restore VGICv3 state on non-VHE systems */
+static void __hyp_text __hyp_vgic_restore_state(struct kvm_vcpu *vcpu)
 {
 	if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
 		__vgic_v3_restore_state(vcpu);
@@ -400,8 +402,6 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	__activate_traps(vcpu);
 	__activate_vm(vcpu->kvm);
 
-	__vgic_restore_state(vcpu);
-
 	sysreg_restore_guest_state_vhe(guest_ctxt);
 	__debug_switch_to_guest(vcpu);
 
@@ -415,7 +415,6 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	fp_enabled = fpsimd_enabled_vhe();
 
 	sysreg_save_guest_state_vhe(guest_ctxt);
-	__vgic_save_state(vcpu);
 
 	__deactivate_traps(vcpu);
 
@@ -451,7 +450,7 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	__activate_traps(vcpu);
 	__activate_vm(kern_hyp_va(vcpu->kvm));
 
-	__vgic_restore_state(vcpu);
+	__hyp_vgic_restore_state(vcpu);
 	__timer_enable_traps(vcpu);
 
 	/*
@@ -484,7 +483,7 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	__sysreg_save_state_nvhe(guest_ctxt);
 	__sysreg32_save_state(vcpu);
 	__timer_disable_traps(vcpu);
-	__vgic_save_state(vcpu);
+	__hyp_vgic_save_state(vcpu);
 
 	__deactivate_traps(vcpu);
 	__deactivate_vm(vcpu);
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 09dbee56ed8f..dba629c5f8ac 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -717,6 +717,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		if (ret <= 0 || need_new_vmid_gen(vcpu->kvm) ||
 		    kvm_request_pending(vcpu)) {
 			vcpu->mode = OUTSIDE_GUEST_MODE;
+			isb(); /* Ensure work in x_flush_hwstate is committed */
 			kvm_pmu_sync_hwstate(vcpu);
 			if (static_branch_unlikely(&userspace_irqchip_in_use))
 				kvm_timer_sync_hwstate(vcpu);
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 12e2a28f437e..eaab4a616ecf 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include
 
 #include "vgic.h"
 
@@ -749,10 +750,22 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
 		vgic_clear_lr(vcpu, count);
 }
 
+static inline bool can_access_vgic_from_kernel(void)
+{
+	/*
+	 * GICv2 can always be accessed from the kernel because it is
+	 * memory-mapped, and VHE systems can access GICv3 EL2 system
+	 * registers.
+	 */
+	return !static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif) || has_vhe();
+}
+
 static inline void vgic_save_state(struct kvm_vcpu *vcpu)
 {
 	if (!static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
 		vgic_v2_save_state(vcpu);
+	else
+		__vgic_v3_save_state(vcpu);
 }
 
 /* Sync back the hardware VGIC state into our emulation after a guest's run. */
@@ -760,7 +773,8 @@ void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 {
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
 
-	vgic_save_state(vcpu);
+	if (can_access_vgic_from_kernel())
+		vgic_save_state(vcpu);
 
 	WARN_ON(vgic_v4_sync_hwstate(vcpu));
 
@@ -777,6 +791,8 @@ static inline void vgic_restore_state(struct kvm_vcpu *vcpu)
 {
 	if (!static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
 		vgic_v2_restore_state(vcpu);
+	else
+		__vgic_v3_restore_state(vcpu);
 }
 
 /* Flush our emulation state into the GIC hardware before entering the guest. */
@@ -803,7 +819,8 @@ void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
 	spin_unlock(&vcpu->arch.vgic_cpu.ap_list_lock);
 
 out:
-	vgic_restore_state(vcpu);
+	if (can_access_vgic_from_kernel())
+		vgic_restore_state(vcpu);
 }
 
 void kvm_vgic_load(struct kvm_vcpu *vcpu)
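
As a summary of the resulting behaviour (illustration only, not part of the
patch), the save path now dispatches roughly as sketched below;
vgic_save_state_sketch() is a hypothetical name, the real dispatch is split
between vgic_save_state(), can_access_vgic_from_kernel() and
__hyp_vgic_save_state():

/* Illustration only: where GIC state gets saved after this patch. */
static void vgic_save_state_sketch(struct kvm_vcpu *vcpu)	/* hypothetical */
{
	if (!static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) {
		/* GICv2: memory-mapped, always accessible from the kernel */
		vgic_v2_save_state(vcpu);
	} else if (has_vhe()) {
		/* GICv3 + VHE: EL2 system registers reachable from the kernel */
		__vgic_v3_save_state(vcpu);
	}
	/*
	 * GICv3 without VHE: can_access_vgic_from_kernel() is false, so the
	 * EL2 world switch calls __hyp_vgic_save_state() instead.
	 */
}

The restore path mirrors this dispatch.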