From patchwork Fri May 3 12:43:31 2019
From: Marc Zyngier
To: Paolo Bonzini, Radim Krčmář
Cc: Alex Bennée, Amit Daniel Kachhap, Andrew Jones, Andrew Murray,
    Christoffer Dall, Dave Martin, Julien Grall, Julien Thierry,
    Kristina Martsenko, Mark Rutland, Peter Maydell, Suzuki K Poulose,
    Will Deacon, zhang.lei, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Subject: [GIT PULL] KVM/arm updates for 5.2
Date: Fri, 3 May 2019 13:43:31 +0100
Message-Id: <20190503124427.190206-1-marc.zyngier@arm.com>

Paolo, Radim,

This is the 5.2 pull request for KVM/arm. Pretty substantial update
this time, as we finally have SVE support in guests, which makes up
the bulk of the changes. Additionally, we have the ARMv8.3 Pointer
Authentication support for guests, and better support for perf event
modifiers.

Please pull,

	M.
The following changes since commit 8c2ffd9174779014c3fe1f96d9dc3641d9175f00:

  Linux 5.1-rc2 (2019-03-24 14:02:26 -0700)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git tags/kvmarm-for-v5.2

for you to fetch changes up to 9eecfc22e0bfc7a4c8ca007f083f0ae492d6e891:

  KVM: arm64: Fix ptrauth ID register masking logic (2019-05-01 17:21:51 +0100)

----------------------------------------------------------------
KVM/arm updates for 5.2

- guest SVE support
- guest Pointer Authentication support
- Better discrimination of perf counters between host and guests

----------------------------------------------------------------
Amit Daniel Kachhap (3):
      KVM: arm64: Add a vcpu flag to control ptrauth for guest
      KVM: arm64: Add userspace flag to enable pointer authentication
      KVM: arm64: Add capability to advertise ptrauth for guest

Andrew Murray (9):
      arm64: arm_pmu: Remove unnecessary isb instruction
      arm64: KVM: Encapsulate kvm_cpu_context in kvm_host_data
      arm64: KVM: Add accessors to track guest/host only counters
      arm64: arm_pmu: Add !VHE support for exclude_host/exclude_guest attributes
      arm64: KVM: Enable !VHE support for :G/:H perf event modifiers
      arm64: KVM: Enable VHE support for :G/:H perf event modifiers
      arm64: KVM: Avoid isb's by using direct pmxevtyper sysreg
      arm64: docs: Document perf event attributes
      arm64: KVM: Fix perf cycle counter support for VHE

Dave Martin (41):
      KVM: Documentation: Document arm64 core registers in detail
      arm64: fpsimd: Always set TIF_FOREIGN_FPSTATE on task state flush
      KVM: arm64: Delete orphaned declaration for __fpsimd_enabled()
      KVM: arm64: Refactor kvm_arm_num_regs() for easier maintenance
      KVM: arm64: Add missing #includes to kvm_host.h
      arm64/sve: Clarify role of the VQ map maintenance functions
      arm64/sve: Check SVE virtualisability
      arm64/sve: Enable SVE state tracking for non-task contexts
      KVM: arm64: Add a vcpu flag to control SVE visibility for the guest
      KVM: arm64: Propagate vcpu into read_id_reg()
      KVM: arm64: Support runtime sysreg visibility filtering
      KVM: arm64/sve: System register context switch and access support
      KVM: arm64/sve: Context switch the SVE registers
      KVM: Allow 2048-bit register access via ioctl interface
      KVM: arm64: Add missing #include of in guest.c
      KVM: arm64: Factor out core register ID enumeration
      KVM: arm64: Reject ioctl access to FPSIMD V-regs on SVE vcpus
      KVM: arm64/sve: Add SVE support to register access ioctl interface
      KVM: arm64: Enumerate SVE register indices for KVM_GET_REG_LIST
      arm64/sve: In-kernel vector length availability query interface
      KVM: arm/arm64: Add hook for arch-specific KVM initialisation
      KVM: arm/arm64: Add KVM_ARM_VCPU_FINALIZE ioctl
      KVM: arm64/sve: Add pseudo-register for the guest's vector lengths
      KVM: arm64/sve: Allow userspace to enable SVE for vcpus
      KVM: arm64: Add a capability to advertise SVE support
      KVM: Document errors for KVM_GET_ONE_REG and KVM_SET_ONE_REG
      KVM: arm64/sve: Document KVM API extensions for SVE
      arm64/sve: Clarify vq map semantics
      KVM: arm/arm64: Demote kvm_arm_init_arch_resources() to just set up SVE
      KVM: arm: Make vcpu finalization stubs into inline functions
      KVM: arm64/sve: sys_regs: Demote redundant vcpu_has_sve() checks to WARNs
      KVM: arm64/sve: Clean up UAPI register ID definitions
      KVM: arm64/sve: Miscellaneous tidyups in guest.c
      KVM: arm64/sve: Make register ioctl access errors more consistent
      KVM: arm64/sve: WARN when avoiding divide-by-zero in sve_reg_to_region()
      KVM: arm64/sve: Simplify KVM_REG_ARM64_SVE_VLS array sizing
      KVM: arm64/sve: Explain validity checks in set_sve_vls()
      KVM: arm/arm64: Clean up vcpu finalization function parameter naming
      KVM: Clarify capability requirements for KVM_ARM_VCPU_FINALIZE
      KVM: Clarify KVM_{SET,GET}_ONE_REG error code documentation
      KVM: arm64: Clarify access behaviour for out-of-range SVE register slice IDs

Kristina Martsenko (1):
      KVM: arm64: Fix ptrauth ID register masking logic

Marc Zyngier (1):
      arm64: KVM: Fix system register enumeration

Mark Rutland (1):
      KVM: arm/arm64: Context-switch ptrauth registers

 Documentation/arm64/perf.txt                   |  85 +++++
 Documentation/arm64/pointer-authentication.txt |  22 +-
 Documentation/virtual/kvm/api.txt              | 178 +++++++++++
 arch/arm/include/asm/kvm_emulate.h             |   2 +
 arch/arm/include/asm/kvm_host.h                |  26 +-
 arch/arm64/Kconfig                             |   6 +-
 arch/arm64/include/asm/fpsimd.h                |  29 +-
 arch/arm64/include/asm/kvm_asm.h               |   3 +-
 arch/arm64/include/asm/kvm_emulate.h           |  16 +
 arch/arm64/include/asm/kvm_host.h              | 101 +++++-
 arch/arm64/include/asm/kvm_hyp.h               |   1 -
 arch/arm64/include/asm/kvm_ptrauth.h           | 111 +++++++
 arch/arm64/include/asm/sysreg.h                |   3 +
 arch/arm64/include/uapi/asm/kvm.h              |  43 +++
 arch/arm64/kernel/asm-offsets.c                |   7 +
 arch/arm64/kernel/cpufeature.c                 |   2 +-
 arch/arm64/kernel/fpsimd.c                     | 179 +++++++----
 arch/arm64/kernel/perf_event.c                 |  50 ++-
 arch/arm64/kernel/signal.c                     |   5 -
 arch/arm64/kvm/Makefile                        |   2 +-
 arch/arm64/kvm/fpsimd.c                        |  17 +-
 arch/arm64/kvm/guest.c                         | 415 +++++++++++++++++++++++--
 arch/arm64/kvm/handle_exit.c                   |  36 ++-
 arch/arm64/kvm/hyp/entry.S                     |  15 +
 arch/arm64/kvm/hyp/switch.c                    |  80 ++++-
 arch/arm64/kvm/pmu.c                           | 239 ++++++++++++++
 arch/arm64/kvm/reset.c                         | 167 +++++++++-
 arch/arm64/kvm/sys_regs.c                      | 183 +++++++++--
 arch/arm64/kvm/sys_regs.h                      |  25 ++
 include/uapi/linux/kvm.h                       |   7 +
 virt/kvm/arm/arm.c                             |  40 ++-
 31 files changed, 1914 insertions(+), 181 deletions(-)
 create mode 100644 Documentation/arm64/perf.txt
 create mode 100644 arch/arm64/include/asm/kvm_ptrauth.h
 create mode 100644 arch/arm64/kvm/pmu.c

From patchwork Fri May 3 12:43:33 2019
From: Marc Zyngier
Subject: [PATCH 02/56] arm64: fpsimd: Always set TIF_FOREIGN_FPSTATE on task state flush
Date: Fri, 3 May 2019 13:43:33 +0100
Message-Id: <20190503124427.190206-3-marc.zyngier@arm.com>
In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com>

From: Dave Martin

This patch updates fpsimd_flush_task_state() to mirror the new
semantics of fpsimd_flush_cpu_state() introduced by commit
d8ad71fa38a9 ("arm64: fpsimd: Fix TIF_FOREIGN_FPSTATE after
invalidating cpu regs").  Both functions now implicitly set
TIF_FOREIGN_FPSTATE to indicate that the task's FPSIMD state is not
loaded into the cpu.

As a side-effect, fpsimd_flush_task_state() now sets
TIF_FOREIGN_FPSTATE even for non-running tasks.  In the case of
non-running tasks this is not useful but also harmless, because the
flag is live only while the corresponding task is running.  This
function is not called from fast paths, so special-casing this for
the task == current case is not really worth it.

Compiler barriers previously present in restore_sve_fpsimd_context()
are pulled into fpsimd_flush_task_state() so that it can be safely
called with preemption enabled if necessary.

Explicit calls to set TIF_FOREIGN_FPSTATE that accompany
fpsimd_flush_task_state() calls and are now redundant are removed
as appropriate.

fpsimd_flush_task_state() is used to get exclusive access to the
representation of the task's state via task_struct, for the purpose
of replacing the state.  Thus, the call to this function should
happen before manipulating fpsimd_state or sve_state etc. in
task_struct.  Anomalous cases are reordered appropriately in order
to make the code more consistent, although there should be no
functional difference since these cases are protected by
local_bh_disable() anyway.
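To make the ordering contract above concrete, here is a minimal
sketch of a caller that replaces a task's saved state.  The helper
name is invented for illustration and is not part of this series:

	/* Hypothetical caller, for illustration only. */
	static void replace_saved_sve_state(struct task_struct *task,
					    const void *new_state, size_t size)
	{
		/*
		 * Gain exclusive access to the task_struct view of the
		 * state first; after this patch the call also sets
		 * TIF_FOREIGN_FPSTATE on the task.
		 */
		fpsimd_flush_task_state(task);

		/* Only now is it safe to touch the saved SVE state: */
		memcpy(task->thread.sve_state, new_state, size);
	}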
Signed-off-by: Dave Martin
Reviewed-by: Alex Bennée
Reviewed-by: Julien Grall
Tested-by: zhang.lei
Signed-off-by: Marc Zyngier
---
 arch/arm64/kernel/fpsimd.c | 25 +++++++++++++++++++------
 arch/arm64/kernel/signal.c |  5 -----
 2 files changed, 19 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 5ebe73b69961..62c37f0ac946 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -550,7 +550,6 @@ int sve_set_vector_length(struct task_struct *task,
 		local_bh_disable();
 
 		fpsimd_save();
-		set_thread_flag(TIF_FOREIGN_FPSTATE);
 	}
 
 	fpsimd_flush_task_state(task);
@@ -816,12 +815,11 @@ asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs)
 	local_bh_disable();
 
 	fpsimd_save();
-	fpsimd_to_sve(current);
 
 	/* Force ret_to_user to reload the registers: */
 	fpsimd_flush_task_state(current);
-	set_thread_flag(TIF_FOREIGN_FPSTATE);
 
+	fpsimd_to_sve(current);
 	if (test_and_set_thread_flag(TIF_SVE))
 		WARN_ON(1); /* SVE access shouldn't have trapped */
 
@@ -894,9 +892,9 @@ void fpsimd_flush_thread(void)
 
 	local_bh_disable();
 
+	fpsimd_flush_task_state(current);
 	memset(&current->thread.uw.fpsimd_state, 0,
 	       sizeof(current->thread.uw.fpsimd_state));
-	fpsimd_flush_task_state(current);
 
 	if (system_supports_sve()) {
 		clear_thread_flag(TIF_SVE);
@@ -933,8 +931,6 @@ void fpsimd_flush_thread(void)
 		current->thread.sve_vl_onexec = 0;
 	}
 
-	set_thread_flag(TIF_FOREIGN_FPSTATE);
-
 	local_bh_enable();
 }
 
@@ -1043,12 +1039,29 @@ void fpsimd_update_current_state(struct user_fpsimd_state const *state)
 
 /*
  * Invalidate live CPU copies of task t's FPSIMD state
+ *
+ * This function may be called with preemption enabled.  The barrier()
+ * ensures that the assignment to fpsimd_cpu is visible to any
+ * preemption/softirq that could race with set_tsk_thread_flag(), so
+ * that TIF_FOREIGN_FPSTATE cannot be spuriously re-cleared.
+ *
+ * The final barrier ensures that TIF_FOREIGN_FPSTATE is seen set by any
+ * subsequent code.
  */
 void fpsimd_flush_task_state(struct task_struct *t)
 {
 	t->thread.fpsimd_cpu = NR_CPUS;
+
+	barrier();
+	set_tsk_thread_flag(t, TIF_FOREIGN_FPSTATE);
+
+	barrier();
 }
 
+/*
+ * Invalidate any task's FPSIMD state that is present on this cpu.
+ * This function must be called with softirqs disabled.
+ */
 void fpsimd_flush_cpu_state(void)
 {
 	__this_cpu_write(fpsimd_last_state.st, NULL);

diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 867a7cea70e5..a9b0485df074 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -296,11 +296,6 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user)
 	 */
 	fpsimd_flush_task_state(current);
-	barrier();
-	/* From now, fpsimd_thread_switch() won't clear TIF_FOREIGN_FPSTATE */
-
-	set_thread_flag(TIF_FOREIGN_FPSTATE);
-	barrier();
 	/* From now, fpsimd_thread_switch() won't touch thread.sve_state */
 
 	sve_alloc(current);

From patchwork Fri May 3 12:43:34 2019
From: Marc Zyngier
Subject: [PATCH 03/56] KVM: arm64: Delete orphaned declaration for __fpsimd_enabled()
Date: Fri, 3 May 2019 13:43:34 +0100
Message-Id: <20190503124427.190206-4-marc.zyngier@arm.com>
In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com>

From: Dave Martin

__fpsimd_enabled() no longer exists, but a dangling declaration has
survived in kvm_hyp.h.  This patch gets rid of it.
Signed-off-by: Dave Martin
Reviewed-by: Alex Bennée
Tested-by: zhang.lei
Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/kvm_hyp.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 4da765f2cca5..ef8b8394d3d1 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -149,7 +149,6 @@ void __debug_switch_to_host(struct kvm_vcpu *vcpu);
 
 void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
-bool __fpsimd_enabled(void);
 
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
 void deactivate_traps_vhe_put(void);

From patchwork Fri May 3 12:43:35 2019
lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 04/56] KVM: arm64: Refactor kvm_arm_num_regs() for easier maintenance Date: Fri, 3 May 2019 13:43:35 +0100 Message-Id: <20190503124427.190206-5-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin kvm_arm_num_regs() adds together various partial register counts in a freeform sum expression, which makes it harder than necessary to read diffs that add, modify or remove a single term in the sum (which is expected to the common case under maintenance). This patch refactors the code to add the term one per line, for maximum readability. Signed-off-by: Dave Martin Reviewed-by: Alex Bennée Tested-by: zhang.lei Signed-off-by: Marc Zyngier --- arch/arm64/kvm/guest.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index dd436a50fce7..62514cba95ca 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -258,8 +258,14 @@ static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) */ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu) { - return num_core_regs() + kvm_arm_num_sys_reg_descs(vcpu) - + kvm_arm_get_fw_num_regs(vcpu) + NUM_TIMER_REGS; + unsigned long res = 0; + + res += num_core_regs(); + res += kvm_arm_num_sys_reg_descs(vcpu); + res += kvm_arm_get_fw_num_regs(vcpu); + res += NUM_TIMER_REGS; + + return res; } /** From patchwork Fri May 3 12:43:36 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928537 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5F8021390 for ; Fri, 3 May 2019 12:45:06 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8C9E820453 for ; Fri, 3 May 2019 12:45:02 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 7CA562621B; Fri, 3 May 2019 12:45:02 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 22D8920453 for ; Fri, 3 May 2019 12:45:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727967AbfECMpB (ORCPT ); Fri, 3 May 2019 08:45:01 -0400 Received: from foss.arm.com ([217.140.101.70]:60078 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727946AbfECMpA (ORCPT ); Fri, 3 May 2019 08:45:00 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5305A15AD; Fri, 3 May 2019 05:45:00 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1A66B3F220; Fri, 3 May 2019 05:44:56 
From: Marc Zyngier
Subject: [PATCH 05/56] KVM: arm64: Add missing #includes to kvm_host.h
Date: Fri, 3 May 2019 13:43:36 +0100
Message-Id: <20190503124427.190206-6-marc.zyngier@arm.com>
In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com>

From: Dave Martin

kvm_host.h uses some declarations from other headers that are
currently included by accident, without an explicit #include.

This patch adds a few #includes that are clearly missing.  Although
the header builds without them today, this should help to avoid
future surprises.

Signed-off-by: Dave Martin
Acked-by: Mark Rutland
Tested-by: zhang.lei
Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/kvm_host.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a01fe087e022..6d10100ff870 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -22,9 +22,13 @@
 #ifndef __ARM64_KVM_HOST_H__
 #define __ARM64_KVM_HOST_H__
 
+#include
 #include
+#include
 #include
+#include
 #include
+#include
 #include
 #include
 #include

From patchwork Fri May 3 12:43:37 2019
From: Marc Zyngier
Subject: [PATCH 06/56] arm64/sve: Clarify role of the VQ map maintenance functions
Date: Fri, 3 May 2019 13:43:37 +0100
Message-Id: <20190503124427.190206-7-marc.zyngier@arm.com>
In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com>

From: Dave Martin

The roles of sve_init_vq_map(), sve_update_vq_map() and
sve_verify_vq_map() are highly non-obvious to anyone who has not
dug through cpufeature.c in detail.

Since the way these functions interact with each other is more
important here than a full understanding of the cpufeatures code,
this patch adds comments to make the functions' roles clearer.

No functional change.

Signed-off-by: Dave Martin
Reviewed-by: Julien Thierry
Reviewed-by: Julien Grall
Tested-by: zhang.lei
Signed-off-by: Marc Zyngier
---
 arch/arm64/kernel/fpsimd.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 62c37f0ac946..f59ea677cd42 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -647,6 +647,10 @@ static void sve_probe_vqs(DECLARE_BITMAP(map, SVE_VQ_MAX))
 	}
 }
 
+/*
+ * Initialise the set of known supported VQs for the boot CPU.
+ * This is called during kernel boot, before secondary CPUs are brought up.
+ */
 void __init sve_init_vq_map(void)
 {
 	sve_probe_vqs(sve_vq_map);
@@ -655,6 +659,7 @@ void __init sve_init_vq_map(void)
 /*
  * If we haven't committed to the set of supported VQs yet, filter out
  * those not supported by the current CPU.
+ * This function is called during the bring-up of early secondary CPUs only.
  */
 void sve_update_vq_map(void)
 {
@@ -662,7 +667,10 @@ void sve_update_vq_map(void)
 	bitmap_and(sve_vq_map, sve_vq_map, sve_secondary_vq_map, SVE_VQ_MAX);
 }
 
-/* Check whether the current CPU supports all VQs in the committed set */
+/*
+ * Check whether the current CPU supports all VQs in the committed set.
+ * This function is called during the bring-up of late secondary CPUs only.
+ */
 int sve_verify_vq_map(void)
 {
 	int ret = 0;
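Taken together, the new comments describe a three-phase lifecycle.
A condensed sketch of the call order (the surrounding boot logic is
simplified; only the three function names come from this patch):

	/* Boot CPU: seed the committed set of supported vector lengths. */
	sve_init_vq_map();

	/* Each early secondary CPU: narrow the committed set. */
	sve_update_vq_map();

	/* Each late (hotplugged) secondary CPU: verify, never modify. */
	if (sve_verify_vq_map())
		cpu_die_early();	/* mismatched CPU is rejected */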
From patchwork Fri May 3 12:43:38 2019
From: Marc Zyngier
Subject: [PATCH 07/56] arm64/sve: Check SVE virtualisability
Date: Fri, 3 May 2019 13:43:38 +0100
Message-Id: <20190503124427.190206-8-marc.zyngier@arm.com>
In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com>

From: Dave Martin

Due to the way the effective SVE vector length is controlled and
trapped at different exception levels, certain mismatches in the
sets of vector lengths supported by different physical CPUs in the
system may prevent straightforward virtualisation of SVE at parity
with the host.

This patch analyses the extent to which SVE can be virtualised
safely without interfering with migration of vcpus between physical
CPUs, and rejects late secondary CPUs that would erode the
situation further.

It is left up to KVM to decide what to do with this information.
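The bitmap arithmetic in the diff below leans on the
vq_to_bit()/bit_to_vq() convention used by fpsimd.c, where larger
vector quantities (VQs) map to lower bit positions.  A sketch of that
convention, with the WARN_ON clamping of the real helpers omitted:

	static unsigned int vq_to_bit(unsigned int vq)
	{
		return SVE_VQ_MAX - vq;
	}

	static unsigned int bit_to_vq(unsigned int bit)
	{
		return SVE_VQ_MAX - bit;
	}

Because large VQs sit at low bit positions, find_last_bit() over such
a bitmap returns the smallest VQ present, which is what the "Find the
lowest such VQ" step in sve_verify_vq_map() below depends on.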
Signed-off-by: Dave Martin
Reviewed-by: Julien Thierry
Tested-by: zhang.lei
Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/fpsimd.h |  1 +
 arch/arm64/kernel/cpufeature.c  |  2 +-
 arch/arm64/kernel/fpsimd.c      | 86 +++++++++++++++++++++++++++------
 3 files changed, 73 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index dd1ad3950ef5..964adc9f312d 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -87,6 +87,7 @@ extern void sve_kernel_enable(const struct arm64_cpu_capabilities *__unused);
 extern u64 read_zcr_features(void);
 
 extern int __ro_after_init sve_max_vl;
+extern int __ro_after_init sve_max_virtualisable_vl;
 
 #ifdef CONFIG_ARM64_SVE

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 4061de10cea6..7f8cc51f0740 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1863,7 +1863,7 @@ static void verify_sve_features(void)
 	unsigned int len = zcr & ZCR_ELx_LEN_MASK;
 
 	if (len < safe_len || sve_verify_vq_map()) {
-		pr_crit("CPU%d: SVE: required vector length(s) missing\n",
+		pr_crit("CPU%d: SVE: vector length support mismatch\n",
 			smp_processor_id());
 		cpu_die_early();
 	}

diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index f59ea677cd42..b219796a4081 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -18,6 +18,7 @@
  */
 
 #include
+#include
 #include
 #include
 #include
@@ -48,6 +49,7 @@
 #include
 #include
 #include
+#include
 
 #define FPEXC_IOF	(1 << 0)
 #define FPEXC_DZF	(1 << 1)
@@ -130,14 +132,18 @@ static int sve_default_vl = -1;
 
 /* Maximum supported vector length across all CPUs (initially poisoned) */
 int __ro_after_init sve_max_vl = SVE_VL_MIN;
+int __ro_after_init sve_max_virtualisable_vl = SVE_VL_MIN;
 
 /* Set of available vector lengths, as vq_to_bit(vq): */
 static __ro_after_init DECLARE_BITMAP(sve_vq_map, SVE_VQ_MAX);
+/* Set of vector lengths present on at least one cpu: */
+static __ro_after_init DECLARE_BITMAP(sve_vq_partial_map, SVE_VQ_MAX);
+
 static void __percpu *efi_sve_state;
 
 #else /* ! CONFIG_ARM64_SVE */
 
 /* Dummy declaration for code that will be optimised out: */
 extern __ro_after_init DECLARE_BITMAP(sve_vq_map, SVE_VQ_MAX);
+extern __ro_after_init DECLARE_BITMAP(sve_vq_partial_map, SVE_VQ_MAX);
 extern void __percpu *efi_sve_state;
 
 #endif /* ! CONFIG_ARM64_SVE */
@@ -623,12 +629,6 @@ int sve_get_current_vl(void)
 	return sve_prctl_status(0);
 }
 
-/*
- * Bitmap for temporary storage of the per-CPU set of supported vector lengths
- * during secondary boot.
- */
-static DECLARE_BITMAP(sve_secondary_vq_map, SVE_VQ_MAX);
-
 static void sve_probe_vqs(DECLARE_BITMAP(map, SVE_VQ_MAX))
 {
 	unsigned int vq, vl;
@@ -654,6 +654,7 @@ static void sve_probe_vqs(DECLARE_BITMAP(map, SVE_VQ_MAX))
 void __init sve_init_vq_map(void)
 {
 	sve_probe_vqs(sve_vq_map);
+	bitmap_copy(sve_vq_partial_map, sve_vq_map, SVE_VQ_MAX);
 }
 
 /*
@@ -663,8 +664,11 @@ void __init sve_init_vq_map(void)
  */
 void sve_update_vq_map(void)
 {
-	sve_probe_vqs(sve_secondary_vq_map);
-	bitmap_and(sve_vq_map, sve_vq_map, sve_secondary_vq_map, SVE_VQ_MAX);
+	DECLARE_BITMAP(tmp_map, SVE_VQ_MAX);
+
+	sve_probe_vqs(tmp_map);
+	bitmap_and(sve_vq_map, sve_vq_map, tmp_map, SVE_VQ_MAX);
+	bitmap_or(sve_vq_partial_map, sve_vq_partial_map, tmp_map, SVE_VQ_MAX);
 }
 
 /*
@@ -673,18 +677,48 @@ void sve_update_vq_map(void)
  */
 int sve_verify_vq_map(void)
 {
-	int ret = 0;
+	DECLARE_BITMAP(tmp_map, SVE_VQ_MAX);
+	unsigned long b;
 
-	sve_probe_vqs(sve_secondary_vq_map);
-	bitmap_andnot(sve_secondary_vq_map, sve_vq_map, sve_secondary_vq_map,
-		      SVE_VQ_MAX);
-	if (!bitmap_empty(sve_secondary_vq_map, SVE_VQ_MAX)) {
+	sve_probe_vqs(tmp_map);
+
+	bitmap_complement(tmp_map, tmp_map, SVE_VQ_MAX);
+	if (bitmap_intersects(tmp_map, sve_vq_map, SVE_VQ_MAX)) {
 		pr_warn("SVE: cpu%d: Required vector length(s) missing\n",
 			smp_processor_id());
-		ret = -EINVAL;
+		return -EINVAL;
 	}
 
-	return ret;
+	if (!IS_ENABLED(CONFIG_KVM) || !is_hyp_mode_available())
+		return 0;
+
+	/*
+	 * For KVM, it is necessary to ensure that this CPU doesn't
+	 * support any vector length that guests may have probed as
+	 * unsupported.
+	 */
+
+	/* Recover the set of supported VQs: */
+	bitmap_complement(tmp_map, tmp_map, SVE_VQ_MAX);
+	/* Find VQs supported that are not globally supported: */
+	bitmap_andnot(tmp_map, tmp_map, sve_vq_map, SVE_VQ_MAX);
+
+	/* Find the lowest such VQ, if any: */
+	b = find_last_bit(tmp_map, SVE_VQ_MAX);
+	if (b >= SVE_VQ_MAX)
+		return 0; /* no mismatches */
+
+	/*
+	 * Mismatches above sve_max_virtualisable_vl are fine, since
+	 * no guest is allowed to configure ZCR_EL2.LEN to exceed this:
+	 */
+	if (sve_vl_from_vq(bit_to_vq(b)) <= sve_max_virtualisable_vl) {
+		pr_warn("SVE: cpu%d: Unsupported vector length(s) present\n",
+			smp_processor_id());
+		return -EINVAL;
+	}
+
+	return 0;
 }
 
 static void __init sve_efi_setup(void)
@@ -751,6 +785,8 @@ u64 read_zcr_features(void)
 void __init sve_setup(void)
 {
 	u64 zcr;
+	DECLARE_BITMAP(tmp_map, SVE_VQ_MAX);
+	unsigned long b;
 
 	if (!system_supports_sve())
 		return;
@@ -779,11 +815,31 @@ void __init sve_setup(void)
 	 */
 	sve_default_vl = find_supported_vector_length(64);
 
+	bitmap_andnot(tmp_map, sve_vq_partial_map, sve_vq_map,
+		      SVE_VQ_MAX);
+
+	b = find_last_bit(tmp_map, SVE_VQ_MAX);
+	if (b >= SVE_VQ_MAX)
+		/* No non-virtualisable VLs found */
+		sve_max_virtualisable_vl = SVE_VQ_MAX;
+	else if (WARN_ON(b == SVE_VQ_MAX - 1))
+		/* No virtualisable VLs?  This is architecturally forbidden. */
+		sve_max_virtualisable_vl = SVE_VQ_MIN;
+	else /* b + 1 < SVE_VQ_MAX */
+		sve_max_virtualisable_vl = sve_vl_from_vq(bit_to_vq(b + 1));
+
+	if (sve_max_virtualisable_vl > sve_max_vl)
+		sve_max_virtualisable_vl = sve_max_vl;
+
 	pr_info("SVE: maximum available vector length %u bytes per vector\n",
 		sve_max_vl);
 	pr_info("SVE: default vector length %u bytes per vector\n",
 		sve_default_vl);
 
+	/* KVM decides whether to support mismatched systems.  Just warn here: */
+	if (sve_max_virtualisable_vl < sve_max_vl)
+		pr_warn("SVE: unvirtualisable vector lengths present\n");
+
 	sve_efi_setup();
 }
From patchwork Fri May 3 12:43:39 2019
From: Marc Zyngier
Subject: [PATCH 08/56] arm64/sve: Enable SVE state tracking for non-task contexts
Date: Fri, 3 May 2019 13:43:39 +0100
Message-Id: <20190503124427.190206-9-marc.zyngier@arm.com>
In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com>

From: Dave Martin

The current FPSIMD/SVE context handling support for non-task (i.e.,
KVM vcpu) contexts does not take SVE into account.  This means that
only task contexts can safely use SVE at present.

In preparation for enabling KVM guests to use SVE, it is necessary
to keep track of SVE state for non-task contexts too.

This patch adds the necessary support, removing assumptions from
the context switch code about the location of the SVE context
storage.

When binding a vcpu context, its vector length is arbitrarily
specified as SVE_VL_MIN for now.  In any case, because TIF_SVE is
presently cleared at vcpu context bind time, the specified vector
length will not be used for anything yet.  In later patches TIF_SVE
will be set here as appropriate, and the appropriate maximum vector
length for the vcpu will be passed when binding.
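For a sense of where this is heading, a hedged sketch of the binding
call KVM could make once a vcpu owns real SVE storage;
vcpu->arch.sve_state and vcpu->arch.sve_max_vl are fields introduced
by later patches in this series, and today's diff passes NULL and
SVE_VL_MIN instead:

	/* Sketch only: anticipates fields added by later patches. */
	fpsimd_bind_state_to_cpu(&vcpu->arch.ctxt.gp_regs.fp_regs,
				 vcpu->arch.sve_state,
				 vcpu->arch.sve_max_vl);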
Signed-off-by: Dave Martin
Reviewed-by: Alex Bennée
Reviewed-by: Julien Grall
Tested-by: zhang.lei
Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/fpsimd.h |  3 ++-
 arch/arm64/kernel/fpsimd.c      | 20 +++++++++++++++-----
 arch/arm64/kvm/fpsimd.c         |  5 ++++-
 3 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 964adc9f312d..df7a14305222 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -56,7 +56,8 @@ extern void fpsimd_restore_current_state(void);
 extern void fpsimd_update_current_state(struct user_fpsimd_state const *state);
 
 extern void fpsimd_bind_task_to_cpu(void);
-extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state);
+extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state,
+				     void *sve_state, unsigned int sve_vl);
 
 extern void fpsimd_flush_task_state(struct task_struct *target);
 extern void fpsimd_flush_cpu_state(void);

diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index b219796a4081..8a93afa78600 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -121,6 +121,8 @@
  */
 struct fpsimd_last_state_struct {
 	struct user_fpsimd_state *st;
+	void *sve_state;
+	unsigned int sve_vl;
 };
 
 static DEFINE_PER_CPU(struct fpsimd_last_state_struct, fpsimd_last_state);
@@ -241,14 +243,15 @@ static void task_fpsimd_load(void)
  */
 void fpsimd_save(void)
 {
-	struct user_fpsimd_state *st = __this_cpu_read(fpsimd_last_state.st);
+	struct fpsimd_last_state_struct const *last =
+		this_cpu_ptr(&fpsimd_last_state);
 	/* set by fpsimd_bind_task_to_cpu() or fpsimd_bind_state_to_cpu() */
 
 	WARN_ON(!in_softirq() && !irqs_disabled());
 
 	if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) {
 		if (system_supports_sve() && test_thread_flag(TIF_SVE)) {
-			if (WARN_ON(sve_get_vl() != current->thread.sve_vl)) {
+			if (WARN_ON(sve_get_vl() != last->sve_vl)) {
 				/*
 				 * Can't save the user regs, so current would
 				 * re-enter user with corrupt state.
@@ -258,9 +261,11 @@ void fpsimd_save(void)
 				return;
 			}
 
-			sve_save_state(sve_pffr(&current->thread), &st->fpsr);
+			sve_save_state((char *)last->sve_state +
+						sve_ffr_offset(last->sve_vl),
+				       &last->st->fpsr);
 		} else
-			fpsimd_save_state(st);
+			fpsimd_save_state(last->st);
 	}
 }
 
@@ -1034,6 +1039,8 @@ void fpsimd_bind_task_to_cpu(void)
 		this_cpu_ptr(&fpsimd_last_state);
 
 	last->st = &current->thread.uw.fpsimd_state;
+	last->sve_state = current->thread.sve_state;
+	last->sve_vl = current->thread.sve_vl;
 	current->thread.fpsimd_cpu = smp_processor_id();
 
 	if (system_supports_sve()) {
@@ -1047,7 +1054,8 @@ void fpsimd_bind_task_to_cpu(void)
 	}
 }
 
-void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st)
+void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state,
+			      unsigned int sve_vl)
 {
 	struct fpsimd_last_state_struct *last =
 		this_cpu_ptr(&fpsimd_last_state);
@@ -1055,6 +1063,8 @@ void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st)
 	WARN_ON(!in_softirq() && !irqs_disabled());
 
 	last->st = st;
+	last->sve_state = sve_state;
+	last->sve_vl = sve_vl;
 }
 
 /*

diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index aac7808ce216..1cf4f0269471 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -85,7 +86,9 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
 	WARN_ON_ONCE(!irqs_disabled());
 
 	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
-		fpsimd_bind_state_to_cpu(&vcpu->arch.ctxt.gp_regs.fp_regs);
+		fpsimd_bind_state_to_cpu(&vcpu->arch.ctxt.gp_regs.fp_regs,
+					 NULL, SVE_VL_MIN);
+
 		clear_thread_flag(TIF_FOREIGN_FPSTATE);
 		clear_thread_flag(TIF_SVE);
 	}

From patchwork Fri May 3 12:43:40 2019
From: Marc Zyngier
Subject: [PATCH 09/56] KVM: arm64: Add a vcpu flag to control SVE visibility for the guest
Date: Fri, 3 May 2019 13:43:40 +0100
Message-Id: <20190503124427.190206-10-marc.zyngier@arm.com>
In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com>

From: Dave Martin

Since SVE will be enabled or disabled on a per-vcpu basis, a flag is
needed in order to track which vcpus have it enabled.

This patch adds a suitable flag and a helper for checking it.

Signed-off-by: Dave Martin
Reviewed-by: Alex Bennée
Tested-by: zhang.lei
Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/kvm_host.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 6d10100ff870..ad4f7f004498 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -328,6 +328,10 @@ struct kvm_vcpu_arch {
 #define KVM_ARM64_FP_HOST		(1 << 2) /* host FP regs loaded */
 #define KVM_ARM64_HOST_SVE_IN_USE	(1 << 3) /* backup for host TIF_SVE */
 #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
+#define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
+
+#define vcpu_has_sve(vcpu) (system_supports_sve() && \
+			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
 
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
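A sketch of how the flag is meant to be set and queried; the
KVM_ARM_VCPU_SVE feature bit comes from later patches in this series,
so treat the first snippet as illustrative:

	/* At vcpu configuration time (wired up later in the series): */
	if (test_bit(KVM_ARM_VCPU_SVE, vcpu->arch.features))
		vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_SVE;

	/* In any SVE-conditional code path: */
	if (vcpu_has_sve(vcpu)) {
		/* ... SVE-specific handling ... */
	}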
From patchwork Fri May 3 12:43:41 2019
From: Marc Zyngier
Subject: [PATCH 10/56] KVM: arm64: Propagate vcpu into read_id_reg()
Date: Fri, 3 May 2019 13:43:41 +0100
Message-Id: <20190503124427.190206-11-marc.zyngier@arm.com>
In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com>

From: Dave Martin

Architecture features that are conditionally visible to the guest
will require run-time checks in the ID register accessor functions.
In particular, read_id_reg() will need to perform checks in order to
generate the correct emulated value for certain ID register fields
such as ID_AA64PFR0_EL1.SVE for example.

This patch propagates vcpu into read_id_reg() so that future patches
can add run-time checks on the guest configuration here.

For now, there is no functional change.

Signed-off-by: Dave Martin
Reviewed-by: Alex Bennée
Tested-by: zhang.lei
Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/sys_regs.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 539feecda5b8..a5d14b5e2ea4 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1044,7 +1044,8 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
 }
 
 /* Read a sanitised cpufeature ID register by sys_reg_desc */
-static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
+static u64 read_id_reg(const struct kvm_vcpu *vcpu,
+		       struct sys_reg_desc const *r, bool raz)
 {
 	u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
 			 (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
@@ -1078,7 +1079,7 @@ static bool __access_id_reg(struct kvm_vcpu *vcpu,
 	if (p->is_write)
 		return write_to_read_only(vcpu, p, r);
 
-	p->regval = read_id_reg(r, raz);
+	p->regval = read_id_reg(vcpu, r, raz);
 	return true;
 }
 
@@ -1107,16 +1108,18 @@ static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
  * are stored, and for set_id_reg() we don't allow the effective value
  * to be changed.
  */
-static int __get_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
+static int __get_id_reg(const struct kvm_vcpu *vcpu,
+			const struct sys_reg_desc *rd, void __user *uaddr,
 			bool raz)
 {
 	const u64 id = sys_reg_to_index(rd);
-	const u64 val = read_id_reg(rd, raz);
+	const u64 val = read_id_reg(vcpu, rd, raz);
 
 	return reg_to_user(uaddr, &val, id);
 }
 
-static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
+static int __set_id_reg(const struct kvm_vcpu *vcpu,
+			const struct sys_reg_desc *rd, void __user *uaddr,
 			bool raz)
 {
 	const u64 id = sys_reg_to_index(rd);
@@ -1128,7 +1131,7 @@ static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
 		return err;
 
 	/* This is what we mean by invariant: you can't change it. */
-	if (val != read_id_reg(rd, raz))
+	if (val != read_id_reg(vcpu, rd, raz))
 		return -EINVAL;
 
 	return 0;
@@ -1137,25 +1140,25 @@ static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
 static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		      const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __get_id_reg(rd, uaddr, false);
+	return __get_id_reg(vcpu, rd, uaddr, false);
 }
 
 static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		      const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __set_id_reg(rd, uaddr, false);
+	return __set_id_reg(vcpu, rd, uaddr, false);
 }
 
 static int get_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 			  const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __get_id_reg(rd, uaddr, true);
+	return __get_id_reg(vcpu, rd, uaddr, true);
 }
 
 static int set_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 			  const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __set_id_reg(rd, uaddr, true);
+	return __set_id_reg(vcpu, rd, uaddr, true);
 }
 
 static bool access_ctr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
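As an illustration of why the vcpu argument matters, this is roughly
the check a later patch in this series adds inside read_id_reg(); the
exact masking belongs to the SVE sysreg patch, so read it as a sketch:

	if (id == SYS_ID_AA64PFR0_EL1) {
		/* Hide the SVE field when SVE is not enabled for this vcpu: */
		if (!vcpu_has_sve(vcpu))
			val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
	}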
From patchwork Fri May 3 12:43:42 2019
From: Marc Zyngier
Subject: [PATCH 11/56] KVM: arm64: Support runtime sysreg visibility filtering
Date: Fri, 3 May 2019 13:43:42 +0100
Message-Id: <20190503124427.190206-12-marc.zyngier@arm.com>
In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com>

From: Dave Martin

Some optional features of the Arm architecture add new system
registers that are not present in the base architecture.

Where these features are optional for the guest, the visibility of
these registers may need to depend on some runtime configuration,
such as a flag passed to KVM_ARM_VCPU_INIT.

For example, ZCR_EL1 and ID_AA64ZFR0_EL1 need to be hidden if SVE
is not enabled for the guest, even though these registers may be
present in the hardware and visible to the host at EL2.

Adding special-case checks all over the place for individual
registers is going to get messy as the number of conditionally-
visible registers grows.

In order to help solve this problem, this patch adds a new sysreg
method visibility() that can be used to hook in any needed runtime
visibility checks.  This method can currently return
REG_HIDDEN_USER to inhibit enumeration and ioctl access to the
register for userspace, and REG_HIDDEN_GUEST to inhibit runtime
access by the guest using MSR/MRS.  Wrappers are added to allow
these flags to be conveniently queried.

This approach allows a conditionally modified view of individual
system registers such as the CPU ID registers, in addition to
completely hiding registers where appropriate.
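To see the hook in action, here is a sketch of the visibility
callback that a later patch in this series registers for the SVE
registers; the shape follows that patch, but take it as illustrative
rather than as part of this diff:

	static unsigned int sve_visibility(const struct kvm_vcpu *vcpu,
					   const struct sys_reg_desc *rd)
	{
		if (vcpu_has_sve(vcpu))
			return 0;	/* fully visible */

		return REG_HIDDEN_USER | REG_HIDDEN_GUEST;
	}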
lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 11/56] KVM: arm64: Support runtime sysreg visibility filtering Date: Fri, 3 May 2019 13:43:42 +0100 Message-Id: <20190503124427.190206-12-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin Some optional features of the Arm architecture add new system registers that are not present in the base architecture. Where these features are optional for the guest, the visibility of these registers may need to depend on some runtime configuration, such as a flag passed to KVM_ARM_VCPU_INIT. For example, ZCR_EL1 and ID_AA64ZFR0_EL1 need to be hidden if SVE is not enabled for the guest, even though these registers may be present in the hardware and visible to the host at EL2. Adding special-case checks all over the place for individual registers is going to get messy as the number of conditionally- visible registers grows. In order to help solve this problem, this patch adds a new sysreg method visibility() that can be used to hook in any needed runtime visibility checks. This method can currently return REG_HIDDEN_USER to inhibit enumeration and ioctl access to the register for userspace, and REG_HIDDEN_GUEST to inhibit runtime access by the guest using MSR/MRS. Wrappers are added to allow these flags to be conveniently queried. This approach allows a conditionally modified view of individual system registers such as the CPU ID registers, in addition to completely hiding register where appropriate. Signed-off-by: Dave Martin Tested-by: zhang.lei Signed-off-by: Marc Zyngier --- arch/arm64/kvm/sys_regs.c | 24 +++++++++++++++++++++--- arch/arm64/kvm/sys_regs.h | 25 +++++++++++++++++++++++++ 2 files changed, 46 insertions(+), 3 deletions(-) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index a5d14b5e2ea4..c86a7b0d3e6b 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1927,6 +1927,12 @@ static void perform_access(struct kvm_vcpu *vcpu, { trace_kvm_sys_access(*vcpu_pc(vcpu), params, r); + /* Check for regs disabled by runtime config */ + if (sysreg_hidden_from_guest(vcpu, r)) { + kvm_inject_undefined(vcpu); + return; + } + /* * Not having an accessor means that we have configured a trap * that we don't know how to handle. 
This certainly qualifies @@ -2438,6 +2444,10 @@ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg if (!r) return get_invariant_sys_reg(reg->id, uaddr); + /* Check for regs disabled by runtime config */ + if (sysreg_hidden_from_user(vcpu, r)) + return -ENOENT; + if (r->get_user) return (r->get_user)(vcpu, r, reg, uaddr); @@ -2459,6 +2469,10 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg if (!r) return set_invariant_sys_reg(reg->id, uaddr); + /* Check for regs disabled by runtime config */ + if (sysreg_hidden_from_user(vcpu, r)) + return -ENOENT; + if (r->set_user) return (r->set_user)(vcpu, r, reg, uaddr); @@ -2515,7 +2529,8 @@ static bool copy_reg_to_user(const struct sys_reg_desc *reg, u64 __user **uind) return true; } -static int walk_one_sys_reg(const struct sys_reg_desc *rd, +static int walk_one_sys_reg(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, u64 __user **uind, unsigned int *total) { @@ -2526,6 +2541,9 @@ static int walk_one_sys_reg(const struct sys_reg_desc *rd, if (!(rd->reg || rd->get_user)) return 0; + if (sysreg_hidden_from_user(vcpu, rd)) + return 0; + if (!copy_reg_to_user(rd, uind)) return -EFAULT; @@ -2554,9 +2572,9 @@ static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind) int cmp = cmp_sys_reg(i1, i2); /* target-specific overrides generic entry. */ if (cmp <= 0) - err = walk_one_sys_reg(i1, &uind, &total); + err = walk_one_sys_reg(vcpu, i1, &uind, &total); else - err = walk_one_sys_reg(i2, &uind, &total); + err = walk_one_sys_reg(vcpu, i2, &uind, &total); if (err) return err; diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h index 3b1bc7f01d0b..2be99508dcb9 100644 --- a/arch/arm64/kvm/sys_regs.h +++ b/arch/arm64/kvm/sys_regs.h @@ -64,8 +64,15 @@ struct sys_reg_desc { const struct kvm_one_reg *reg, void __user *uaddr); int (*set_user)(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, const struct kvm_one_reg *reg, void __user *uaddr); + + /* Return mask of REG_* runtime visibility overrides */ + unsigned int (*visibility)(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd); }; +#define REG_HIDDEN_USER (1 << 0) /* hidden from userspace ioctls */ +#define REG_HIDDEN_GUEST (1 << 1) /* hidden from guest */ + static inline void print_sys_reg_instr(const struct sys_reg_params *p) { /* Look, we even formatted it for you to paste into the table! 
*/ @@ -102,6 +109,24 @@ static inline void reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r __vcpu_sys_reg(vcpu, r->reg) = r->val; } +static inline bool sysreg_hidden_from_guest(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *r) +{ + if (likely(!r->visibility)) + return false; + + return r->visibility(vcpu, r) & REG_HIDDEN_GUEST; +} + +static inline bool sysreg_hidden_from_user(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *r) +{ + if (likely(!r->visibility)) + return false; + + return r->visibility(vcpu, r) & REG_HIDDEN_USER; +} + static inline int cmp_sys_reg(const struct sys_reg_desc *i1, const struct sys_reg_desc *i2) { From patchwork Fri May 3 12:43:43 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928555 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3A94A1398 for ; Fri, 3 May 2019 12:45:27 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2433D204FD for ; Fri, 3 May 2019 12:45:27 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 1807028595; Fri, 3 May 2019 12:45:27 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 56BB2204FD for ; Fri, 3 May 2019 12:45:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727983AbfECMpZ (ORCPT ); Fri, 3 May 2019 08:45:25 -0400 Received: from foss.arm.com ([217.140.101.70]:60232 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727930AbfECMpZ (ORCPT ); Fri, 3 May 2019 08:45:25 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1BBF6165C; Fri, 3 May 2019 05:45:25 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A91E63F220; Fri, 3 May 2019 05:45:21 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 12/56] KVM: arm64/sve: System register context switch and access support Date: Fri, 3 May 2019 13:43:43 +0100 Message-Id: <20190503124427.190206-13-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin This patch adds the necessary support for context switching ZCR_EL1 for each vcpu. 
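In outline, the handling this adds looks like the following (a simplified sketch of the hunks below, with the non-SVE paths omitted; the names are those used in the patch):

    /* Hyp switch (VHE only, since SVE requires VHE): on an FP/SIMD
     * trap from an SVE-enabled guest, restore the guest's ZCR_EL1
     * through its VHE alias once the FPSIMD registers are loaded:
     */
    if (vcpu_has_sve(vcpu))
            write_sysreg_s(vcpu->arch.ctxt.sys_regs[ZCR_EL1], SYS_ZCR_EL12);

    /* vcpu put: if the guest's FP/SVE state is live in the CPU, save
     * the guest's ZCR_EL1 back into the vcpu context via the same alias:
     */
    if (vcpu_has_sve(vcpu))
            vcpu->arch.ctxt.sys_regs[ZCR_EL1] = read_sysreg_s(SYS_ZCR_EL12);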
ZCR_EL1 is trapped alongside the FPSIMD/SVE registers, so it makes sense for it to be handled as part of the guest FPSIMD/SVE context for context switch purposes instead of handling it as a general system register. This means that it can be switched in lazily at the appropriate time. No effort is made to track host context for this register, since SVE requires VHE: thus the host's value for this register lives permanently in ZCR_EL2 and does not alias the guest's value at any time. The Hyp switch and fpsimd context handling code is extended appropriately. Accessors are added in sys_regs.c to expose the SVE system registers and ID register fields. Because these need to be conditionally visible based on the guest configuration, they are implemented separately for now rather than by use of the generic system register helpers. This may be abstracted better later on when/if there are more features requiring this model. ID_AA64ZFR0_EL1 is RO-RAZ for MRS/MSR when SVE is disabled for the guest, but for compatibility with non-SVE-aware KVM implementations the register should not be enumerated at all for KVM_GET_REG_LIST in this case. For consistency we also reject ioctl access to the register. This ensures that a non-SVE-enabled guest looks the same to userspace, irrespective of whether the kernel KVM implementation supports SVE. Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/include/asm/sysreg.h | 3 ++ arch/arm64/kvm/fpsimd.c | 9 +++- arch/arm64/kvm/hyp/switch.c | 3 ++ arch/arm64/kvm/sys_regs.c | 83 +++++++++++++++++++++++++++++-- 5 files changed, 93 insertions(+), 6 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index ad4f7f004498..22cf484b561f 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -121,6 +121,7 @@ enum vcpu_sysreg { SCTLR_EL1, /* System Control Register */ ACTLR_EL1, /* Auxiliary Control Register */ CPACR_EL1, /* Coprocessor Access Control */ + ZCR_EL1, /* SVE Control */ TTBR0_EL1, /* Translation Table Base Register 0 */ TTBR1_EL1, /* Translation Table Base Register 1 */ TCR_EL1, /* Translation Control Register */ diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index 5b267dec6194..4d6262df79bb 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -454,6 +454,9 @@ #define SYS_ICH_LR14_EL2 __SYS__LR8_EL2(6) #define SYS_ICH_LR15_EL2 __SYS__LR8_EL2(7) +/* VHE encodings for architectural EL0/1 system registers */ +#define SYS_ZCR_EL12 sys_reg(3, 5, 1, 2, 0) + /* Common SCTLR_ELx flags.
*/ #define SCTLR_ELx_DSSBS (_BITUL(44)) #define SCTLR_ELx_ENIA (_BITUL(31)) diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index 1cf4f0269471..7053bf402131 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -103,14 +103,21 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) { unsigned long flags; + bool host_has_sve = system_supports_sve(); + bool guest_has_sve = vcpu_has_sve(vcpu); local_irq_save(flags); if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) { + u64 *guest_zcr = &vcpu->arch.ctxt.sys_regs[ZCR_EL1]; + /* Clean guest FP state to memory and invalidate cpu view */ fpsimd_save(); fpsimd_flush_cpu_state(); - } else if (system_supports_sve()) { + + if (guest_has_sve) + *guest_zcr = read_sysreg_s(SYS_ZCR_EL12); + } else if (host_has_sve) { /* * The FPSIMD/SVE state in the CPU has not been touched, and we * have SVE (and VHE): CPACR_EL1 (alias CPTR_EL2) has been diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c index 3563fe655cd5..9d46066276b9 100644 --- a/arch/arm64/kvm/hyp/switch.c +++ b/arch/arm64/kvm/hyp/switch.c @@ -351,6 +351,9 @@ static bool __hyp_text __hyp_switch_fpsimd(struct kvm_vcpu *vcpu) __fpsimd_restore_state(&vcpu->arch.ctxt.gp_regs.fp_regs); + if (vcpu_has_sve(vcpu)) + write_sysreg_s(vcpu->arch.ctxt.sys_regs[ZCR_EL1], SYS_ZCR_EL12); + /* Skip restoring fpexc32 for AArch64 guests */ if (!(read_sysreg(hcr_el2) & HCR_RW)) write_sysreg(vcpu->arch.ctxt.sys_regs[FPEXC32_EL2], diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index c86a7b0d3e6b..09e9b0625911 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1051,10 +1051,7 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, (u32)r->CRn, (u32)r->CRm, (u32)r->Op2); u64 val = raz ? 
0 : read_sanitised_ftr_reg(id); - if (id == SYS_ID_AA64PFR0_EL1) { - if (val & (0xfUL << ID_AA64PFR0_SVE_SHIFT)) - kvm_debug("SVE unsupported for guests, suppressing\n"); - + if (id == SYS_ID_AA64PFR0_EL1 && !vcpu_has_sve(vcpu)) { val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT); } else if (id == SYS_ID_AA64ISAR1_EL1) { const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) | @@ -1101,6 +1098,81 @@ static int reg_from_user(u64 *val, const void __user *uaddr, u64 id); static int reg_to_user(void __user *uaddr, const u64 *val, u64 id); static u64 sys_reg_to_index(const struct sys_reg_desc *reg); +/* Visibility overrides for SVE-specific control registers */ +static unsigned int sve_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) +{ + if (vcpu_has_sve(vcpu)) + return 0; + + return REG_HIDDEN_USER | REG_HIDDEN_GUEST; +} + +/* Visibility overrides for SVE-specific ID registers */ +static unsigned int sve_id_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) +{ + if (vcpu_has_sve(vcpu)) + return 0; + + return REG_HIDDEN_USER; +} + +/* Generate the emulated ID_AA64ZFR0_EL1 value exposed to the guest */ +static u64 guest_id_aa64zfr0_el1(const struct kvm_vcpu *vcpu) +{ + if (!vcpu_has_sve(vcpu)) + return 0; + + return read_sanitised_ftr_reg(SYS_ID_AA64ZFR0_EL1); +} + +static bool access_id_aa64zfr0_el1(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *rd) +{ + if (p->is_write) + return write_to_read_only(vcpu, p, rd); + + p->regval = guest_id_aa64zfr0_el1(vcpu); + return true; +} + +static int get_id_aa64zfr0_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, + const struct kvm_one_reg *reg, void __user *uaddr) +{ + u64 val; + + if (!vcpu_has_sve(vcpu)) + return -ENOENT; + + val = guest_id_aa64zfr0_el1(vcpu); + return reg_to_user(uaddr, &val, reg->id); +} + +static int set_id_aa64zfr0_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, + const struct kvm_one_reg *reg, void __user *uaddr) +{ + const u64 id = sys_reg_to_index(rd); + int err; + u64 val; + + if (!vcpu_has_sve(vcpu)) + return -ENOENT; + + err = reg_from_user(&val, uaddr, id); + if (err) + return err; + + /* This is what we mean by invariant: you can't change it. 
*/ + if (val != guest_id_aa64zfr0_el1(vcpu)) + return -EINVAL; + + return 0; +} + /* * cpufeature ID register user accessors * @@ -1346,7 +1418,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { ID_SANITISED(ID_AA64PFR1_EL1), ID_UNALLOCATED(4,2), ID_UNALLOCATED(4,3), - ID_UNALLOCATED(4,4), + { SYS_DESC(SYS_ID_AA64ZFR0_EL1), access_id_aa64zfr0_el1, .get_user = get_id_aa64zfr0_el1, .set_user = set_id_aa64zfr0_el1, .visibility = sve_id_visibility }, ID_UNALLOCATED(4,5), ID_UNALLOCATED(4,6), ID_UNALLOCATED(4,7), @@ -1383,6 +1455,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_SCTLR_EL1), access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 }, { SYS_DESC(SYS_CPACR_EL1), NULL, reset_val, CPACR_EL1, 0 }, + { SYS_DESC(SYS_ZCR_EL1), NULL, reset_val, ZCR_EL1, 0, .visibility = sve_visibility }, { SYS_DESC(SYS_TTBR0_EL1), access_vm_reg, reset_unknown, TTBR0_EL1 }, { SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 }, { SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 }, From patchwork Fri May 3 12:43:44 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928559 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4B0521390 for ; Fri, 3 May 2019 12:45:31 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3A192204FD for ; Fri, 3 May 2019 12:45:31 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2D8D828595; Fri, 3 May 2019 12:45:31 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8C118204FD for ; Fri, 3 May 2019 12:45:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727986AbfECMp3 (ORCPT ); Fri, 3 May 2019 08:45:29 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60252 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727640AbfECMp3 (ORCPT ); Fri, 3 May 2019 08:45:29 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 968E8169E; Fri, 3 May 2019 05:45:28 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5F5AD3F220; Fri, 3 May 2019 05:45:25 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . 
lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 13/56] KVM: arm64/sve: Context switch the SVE registers Date: Fri, 3 May 2019 13:43:44 +0100 Message-Id: <20190503124427.190206-14-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin In order to give each vcpu its own view of the SVE registers, this patch adds context storage via a new sve_state pointer in struct vcpu_arch. An additional member sve_max_vl is also added for each vcpu, to determine the maximum vector length visible to the guest and thus the value to be configured in ZCR_EL2.LEN while the vcpu is active. This also determines the layout and size of the storage in sve_state, which is read and written by the same backend functions that are used for context-switching the SVE state for host tasks. On SVE-enabled vcpus, SVE access traps are now handled by switching in the vcpu's SVE context and disabling the trap before returning to the guest. On other vcpus, the trap is not handled and an exit back to the host occurs, where the handle_sve() fallback path reflects an undefined instruction exception back to the guest, consistently with the behaviour of non-SVE-capable hardware (as was done unconditionally prior to this patch). No SVE handling is added on non-VHE-only paths, since VHE is an architectural and Kconfig prerequisite of SVE. Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_host.h | 6 +++ arch/arm64/kvm/fpsimd.c | 5 ++- arch/arm64/kvm/hyp/switch.c | 75 +++++++++++++++++++++++-------- 3 files changed, 66 insertions(+), 20 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 22cf484b561f..4fabfd250de8 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -228,6 +228,8 @@ struct vcpu_reset_state { struct kvm_vcpu_arch { struct kvm_cpu_context ctxt; + void *sve_state; + unsigned int sve_max_vl; /* HYP configuration */ u64 hcr_el2; @@ -323,6 +325,10 @@ struct kvm_vcpu_arch { bool sysregs_loaded_on_cpu; }; +/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ +#define vcpu_sve_pffr(vcpu) ((void *)((char *)((vcpu)->arch.sve_state) + \ + sve_ffr_offset((vcpu)->arch.sve_max_vl))) + /* vcpu_arch flags field values: */ #define KVM_ARM64_DEBUG_DIRTY (1 << 0) #define KVM_ARM64_FP_ENABLED (1 << 1) /* guest FP regs loaded */ diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index 7053bf402131..6e3c9c8b2df9 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -87,10 +87,11 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) { fpsimd_bind_state_to_cpu(&vcpu->arch.ctxt.gp_regs.fp_regs, - NULL, SVE_VL_MIN); + vcpu->arch.sve_state, + vcpu->arch.sve_max_vl); clear_thread_flag(TIF_FOREIGN_FPSTATE); - clear_thread_flag(TIF_SVE); + update_thread_flag(TIF_SVE, vcpu_has_sve(vcpu)); } } diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c index 9d46066276b9..5444b9c6fb5c 100644 --- a/arch/arm64/kvm/hyp/switch.c +++ b/arch/arm64/kvm/hyp/switch.c @@ -100,7 +100,10 @@ static void activate_traps_vhe(struct kvm_vcpu *vcpu) val = 
read_sysreg(cpacr_el1); val |= CPACR_EL1_TTA; val &= ~CPACR_EL1_ZEN; - if (!update_fp_enabled(vcpu)) { + if (update_fp_enabled(vcpu)) { + if (vcpu_has_sve(vcpu)) + val |= CPACR_EL1_ZEN; + } else { val &= ~CPACR_EL1_FPEN; __activate_traps_fpsimd32(vcpu); } @@ -317,16 +320,48 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu) return true; } -static bool __hyp_text __hyp_switch_fpsimd(struct kvm_vcpu *vcpu) +/* Check for an FPSIMD/SVE trap and handle as appropriate */ +static bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) { - struct user_fpsimd_state *host_fpsimd = vcpu->arch.host_fpsimd_state; + bool vhe, sve_guest, sve_host; + u8 hsr_ec; - if (has_vhe()) - write_sysreg(read_sysreg(cpacr_el1) | CPACR_EL1_FPEN, - cpacr_el1); - else + if (!system_supports_fpsimd()) + return false; + + if (system_supports_sve()) { + sve_guest = vcpu_has_sve(vcpu); + sve_host = vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE; + vhe = true; + } else { + sve_guest = false; + sve_host = false; + vhe = has_vhe(); + } + + hsr_ec = kvm_vcpu_trap_get_class(vcpu); + if (hsr_ec != ESR_ELx_EC_FP_ASIMD && + hsr_ec != ESR_ELx_EC_SVE) + return false; + + /* Don't handle SVE traps for non-SVE vcpus here: */ + if (!sve_guest) + if (hsr_ec != ESR_ELx_EC_FP_ASIMD) + return false; + + /* Valid trap. Switch the context: */ + + if (vhe) { + u64 reg = read_sysreg(cpacr_el1) | CPACR_EL1_FPEN; + + if (sve_guest) + reg |= CPACR_EL1_ZEN; + + write_sysreg(reg, cpacr_el1); + } else { write_sysreg(read_sysreg(cptr_el2) & ~(u64)CPTR_EL2_TFP, cptr_el2); + } isb(); @@ -335,24 +370,28 @@ static bool __hyp_text __hyp_switch_fpsimd(struct kvm_vcpu *vcpu) * In the SVE case, VHE is assumed: it is enforced by * Kconfig and kvm_arch_init(). */ - if (system_supports_sve() && - (vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE)) { + if (sve_host) { struct thread_struct *thread = container_of( - host_fpsimd, + vcpu->arch.host_fpsimd_state, struct thread_struct, uw.fpsimd_state); - sve_save_state(sve_pffr(thread), &host_fpsimd->fpsr); + sve_save_state(sve_pffr(thread), + &vcpu->arch.host_fpsimd_state->fpsr); } else { - __fpsimd_save_state(host_fpsimd); + __fpsimd_save_state(vcpu->arch.host_fpsimd_state); } vcpu->arch.flags &= ~KVM_ARM64_FP_HOST; } - __fpsimd_restore_state(&vcpu->arch.ctxt.gp_regs.fp_regs); - - if (vcpu_has_sve(vcpu)) + if (sve_guest) { + sve_load_state(vcpu_sve_pffr(vcpu), + &vcpu->arch.ctxt.gp_regs.fp_regs.fpsr, + sve_vq_from_vl(vcpu->arch.sve_max_vl) - 1); write_sysreg_s(vcpu->arch.ctxt.sys_regs[ZCR_EL1], SYS_ZCR_EL12); + } else { + __fpsimd_restore_state(&vcpu->arch.ctxt.gp_regs.fp_regs); + } /* Skip restoring fpexc32 for AArch64 guests */ if (!(read_sysreg(hcr_el2) & HCR_RW)) @@ -388,10 +427,10 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) * and restore the guest context lazily. * If FP/SIMD is not implemented, handle the trap and inject an * undefined instruction exception to the guest. + * Similarly for trapped SVE accesses. 
*/ - if (system_supports_fpsimd() && - kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_FP_ASIMD) - return __hyp_switch_fpsimd(vcpu); + if (__hyp_handle_fpsimd(vcpu)) + return true; if (!__populate_fault_info(vcpu)) return true; From patchwork Fri May 3 12:43:45 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928561 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 711911390 for ; Fri, 3 May 2019 12:45:34 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5EB87204FD for ; Fri, 3 May 2019 12:45:34 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 518F328595; Fri, 3 May 2019 12:45:34 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id CAD0D204FD for ; Fri, 3 May 2019 12:45:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727991AbfECMpc (ORCPT ); Fri, 3 May 2019 08:45:32 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60278 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727640AbfECMpc (ORCPT ); Fri, 3 May 2019 08:45:32 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1CE40165C; Fri, 3 May 2019 05:45:32 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DA1073F220; Fri, 3 May 2019 05:45:28 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 14/56] KVM: Allow 2048-bit register access via ioctl interface Date: Fri, 3 May 2019 13:43:45 +0100 Message-Id: <20190503124427.190206-15-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin The Arm SVE architecture defines registers that are up to 2048 bits in size (with some possibility of further future expansion). In order to avoid the need for an excessively large number of ioctls when saving and restoring a vcpu's registers, this patch adds a #define to make support for individual 2048-bit registers through the KVM_{GET,SET}_ONE_REG ioctl interface official. This will allow each SVE register to be accessed in a single call. 
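To see why a single #define suffices: bits [55:52] of a register ID hold log2 of the register's size in bytes (this is how KVM_REG_SIZE() in the UAPI headers decodes them), so the new constant follows mechanically. An illustrative check, not part of the patch:

    /* Illustrative only: a 2048-bit register is 256 bytes = 1 << 8,
     * and the ID size field (bits [55:52]) stores that log2 value:
     */
    _Static_assert((8ULL << 52) == 0x0080000000000000ULL,
                   "KVM_REG_SIZE_U2048 encodes log2(256 bytes) in bits [55:52]");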
There are sufficient spare bits in the register id size field for this change, so there is no ABI impact, providing that KVM_GET_REG_LIST does not enumerate any 2048-bit register unless userspace explicitly opts in to the relevant architecture-specific features. Signed-off-by: Dave Martin Reviewed-by: Alex Bennée Tested-by: zhang.lei Signed-off-by: Marc Zyngier --- include/uapi/linux/kvm.h | 1 + 1 file changed, 1 insertion(+) diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 6d4ea4b6c922..dc77a5a3648d 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1145,6 +1145,7 @@ struct kvm_dirty_tlb { #define KVM_REG_SIZE_U256 0x0050000000000000ULL #define KVM_REG_SIZE_U512 0x0060000000000000ULL #define KVM_REG_SIZE_U1024 0x0070000000000000ULL +#define KVM_REG_SIZE_U2048 0x0080000000000000ULL struct kvm_reg_list { __u64 n; /* number of regs */ From patchwork Fri May 3 12:43:46 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928563 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9E8241398 for ; Fri, 3 May 2019 12:45:37 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8BE2F204FD for ; Fri, 3 May 2019 12:45:37 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 8035B28595; Fri, 3 May 2019 12:45:37 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 23BB9204FD for ; Fri, 3 May 2019 12:45:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727978AbfECMpg (ORCPT ); Fri, 3 May 2019 08:45:36 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60300 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727987AbfECMpg (ORCPT ); Fri, 3 May 2019 08:45:36 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 97DE580D; Fri, 3 May 2019 05:45:35 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6052B3F220; Fri, 3 May 2019 05:45:32 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . 
lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 15/56] KVM: arm64: Add missing #include of in guest.c Date: Fri, 3 May 2019 13:43:46 +0100 Message-Id: <20190503124427.190206-16-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin arch/arm64/kvm/guest.c uses the string functions, but the corresponding header is not included. We seem to get away with this for now, but for completeness this patch adds the #include, in preparation for adding yet more memset() calls. Signed-off-by: Dave Martin Tested-by: zhang.lei Signed-off-by: Marc Zyngier --- arch/arm64/kvm/guest.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 62514cba95ca..3e38eb28b03c 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -23,6 +23,7 @@ #include #include #include +#include #include #include #include From patchwork Fri May 3 12:43:47 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928567 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 78FE41390 for ; Fri, 3 May 2019 12:45:41 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 67BD7204FD for ; Fri, 3 May 2019 12:45:41 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 5901628595; Fri, 3 May 2019 12:45:41 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E47E3204FD for ; Fri, 3 May 2019 12:45:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727987AbfECMpj (ORCPT ); Fri, 3 May 2019 08:45:39 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60318 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727710AbfECMpj (ORCPT ); Fri, 3 May 2019 08:45:39 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1ED44165C; Fri, 3 May 2019 05:45:39 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DB6713F220; Fri, 3 May 2019 05:45:35 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . 
lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 16/56] KVM: arm64: Factor out core register ID enumeration Date: Fri, 3 May 2019 13:43:47 +0100 Message-Id: <20190503124427.190206-17-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin In preparation for adding logic to filter out some KVM_REG_ARM_CORE registers from the KVM_GET_REG_LIST output, this patch factors out the core register enumeration into a separate function and rebuilds num_core_regs() on top of it. This may be a little more expensive (depending on how good a job the compiler does of specialising the code), but KVM_GET_REG_LIST is not a hot path. This will make it easier to consolidate ID filtering code in one place. No functional change. Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei Signed-off-by: Marc Zyngier --- arch/arm64/kvm/guest.c | 33 +++++++++++++++++++++++++-------- 1 file changed, 25 insertions(+), 8 deletions(-) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 3e38eb28b03c..a391a61b1033 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -23,6 +23,7 @@ #include #include #include +#include #include #include #include @@ -194,9 +195,28 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) return -EINVAL; } +static int kvm_arm_copy_core_reg_indices(u64 __user *uindices) +{ + unsigned int i; + int n = 0; + const u64 core_reg = KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM_CORE; + + for (i = 0; i < sizeof(struct kvm_regs) / sizeof(__u32); i++) { + if (uindices) { + if (put_user(core_reg | i, uindices)) + return -EFAULT; + uindices++; + } + + n++; + } + + return n; +} + static unsigned long num_core_regs(void) { - return sizeof(struct kvm_regs) / sizeof(__u32); + return kvm_arm_copy_core_reg_indices(NULL); } /** @@ -276,15 +296,12 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu) */ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) { - unsigned int i; - const u64 core_reg = KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM_CORE; int ret; - for (i = 0; i < sizeof(struct kvm_regs) / sizeof(__u32); i++) { - if (put_user(core_reg | i, uindices)) - return -EFAULT; - uindices++; - } + ret = kvm_arm_copy_core_reg_indices(uindices); + if (ret) + return ret; + uindices += ret; ret = kvm_arm_copy_fw_reg_indices(vcpu, uindices); if (ret) From patchwork Fri May 3 12:43:48 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928569 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D36231398 for ; Fri, 3 May 2019 12:45:44 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C1DFE204FD for ; Fri, 3 May 2019 12:45:44 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B5B4728595; Fri, 3 May 2019 12:45:44 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, 
score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3BE98204FD for ; Fri, 3 May 2019 12:45:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727959AbfECMpn (ORCPT ); Fri, 3 May 2019 08:45:43 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60340 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727700AbfECMpm (ORCPT ); Fri, 3 May 2019 08:45:42 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9964B80D; Fri, 3 May 2019 05:45:42 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 624BD3F220; Fri, 3 May 2019 05:45:39 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 17/56] KVM: arm64: Reject ioctl access to FPSIMD V-regs on SVE vcpus Date: Fri, 3 May 2019 13:43:48 +0100 Message-Id: <20190503124427.190206-18-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin In order to avoid the pointless complexity of maintaining two ioctl register access views of the same data, this patch blocks ioctl access to the FPSIMD V-registers on vcpus that support SVE. This will make it more straightforward to add SVE register access support. Since SVE is an opt-in feature for userspace, this will not affect existing users. 
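From userspace, the new behaviour can be probed along these lines (hypothetical sketch, not part of the patch; the usual <linux/kvm.h> ioctl setup, includes and error handling are assumed):

    /* Hypothetical userspace probe (not from the patch): on an
     * SVE-enabled vcpu, a KVM_REG_ARM_CORE access to a V-register is
     * now rejected with EINVAL; the SVE Z-register view added later
     * in this series must be used instead.
     */
    __uint128_t buf;
    struct kvm_one_reg reg = {
            .id   = KVM_REG_ARM64 | KVM_REG_SIZE_U128 | KVM_REG_ARM_CORE |
                    KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]),
            .addr = (__u64)&buf,
    };

    if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0 && errno == EINVAL)
            printf("V-reg access blocked, as expected on an SVE vcpu\n");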
Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei Signed-off-by: Marc Zyngier --- arch/arm64/kvm/guest.c | 48 +++++++++++++++++++++++++++++++----------- 1 file changed, 36 insertions(+), 12 deletions(-) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index a391a61b1033..756d0d614993 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -54,12 +54,19 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu) return 0; } +static bool core_reg_offset_is_vreg(u64 off) +{ + return off >= KVM_REG_ARM_CORE_REG(fp_regs.vregs) && + off < KVM_REG_ARM_CORE_REG(fp_regs.fpsr); +} + static u64 core_reg_offset_from_id(u64 id) { return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE); } -static int validate_core_offset(const struct kvm_one_reg *reg) +static int validate_core_offset(const struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) { u64 off = core_reg_offset_from_id(reg->id); int size; @@ -91,11 +98,19 @@ static int validate_core_offset(const struct kvm_one_reg *reg) return -EINVAL; } - if (KVM_REG_SIZE(reg->id) == size && - IS_ALIGNED(off, size / sizeof(__u32))) - return 0; + if (KVM_REG_SIZE(reg->id) != size || + !IS_ALIGNED(off, size / sizeof(__u32))) + return -EINVAL; - return -EINVAL; + /* + * The KVM_REG_ARM64_SVE regs must be used instead of + * KVM_REG_ARM_CORE for accessing the FPSIMD V-registers on + * SVE-enabled vcpus: + */ + if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(off)) + return -EINVAL; + + return 0; } static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) @@ -117,7 +132,7 @@ static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) (off + (KVM_REG_SIZE(reg->id) / sizeof(__u32))) >= nr_regs) return -ENOENT; - if (validate_core_offset(reg)) + if (validate_core_offset(vcpu, reg)) return -EINVAL; if (copy_to_user(uaddr, ((u32 *)regs) + off, KVM_REG_SIZE(reg->id))) @@ -142,7 +157,7 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) (off + (KVM_REG_SIZE(reg->id) / sizeof(__u32))) >= nr_regs) return -ENOENT; - if (validate_core_offset(reg)) + if (validate_core_offset(vcpu, reg)) return -EINVAL; if (KVM_REG_SIZE(reg->id) > sizeof(tmp)) @@ -195,13 +210,22 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) return -EINVAL; } -static int kvm_arm_copy_core_reg_indices(u64 __user *uindices) +static int copy_core_reg_indices(const struct kvm_vcpu *vcpu, + u64 __user *uindices) { unsigned int i; int n = 0; const u64 core_reg = KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM_CORE; for (i = 0; i < sizeof(struct kvm_regs) / sizeof(__u32); i++) { + /* + * The KVM_REG_ARM64_SVE regs must be used instead of + * KVM_REG_ARM_CORE for accessing the FPSIMD V-registers on + * SVE-enabled vcpus: + */ + if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(i)) + continue; + if (uindices) { if (put_user(core_reg | i, uindices)) return -EFAULT; @@ -214,9 +238,9 @@ static int kvm_arm_copy_core_reg_indices(u64 __user *uindices) return n; } -static unsigned long num_core_regs(void) +static unsigned long num_core_regs(const struct kvm_vcpu *vcpu) { - return kvm_arm_copy_core_reg_indices(NULL); + return copy_core_reg_indices(vcpu, NULL); } /** @@ -281,7 +305,7 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu) { unsigned long res = 0; - res += num_core_regs(); + res += num_core_regs(vcpu); res += kvm_arm_num_sys_reg_descs(vcpu); res += kvm_arm_get_fw_num_regs(vcpu); res += NUM_TIMER_REGS; @@ -298,7 +322,7 @@ int 
kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) { int ret; - ret = kvm_arm_copy_core_reg_indices(uindices); + ret = copy_core_reg_indices(vcpu, uindices); if (ret) return ret; uindices += ret; From patchwork Fri May 3 12:43:49 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928571 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3B2611398 for ; Fri, 3 May 2019 12:45:49 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 28071204FD for ; Fri, 3 May 2019 12:45:49 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 1BF7328595; Fri, 3 May 2019 12:45:49 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2AAAD204FD for ; Fri, 3 May 2019 12:45:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727710AbfECMpr (ORCPT ); Fri, 3 May 2019 08:45:47 -0400 Received: from foss.arm.com ([217.140.101.70]:60368 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727640AbfECMpq (ORCPT ); Fri, 3 May 2019 08:45:46 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 20E3E80D; Fri, 3 May 2019 05:45:46 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DD10A3F220; Fri, 3 May 2019 05:45:42 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 18/56] KVM: arm64/sve: Add SVE support to register access ioctl interface Date: Fri, 3 May 2019 13:43:49 +0100 Message-Id: <20190503124427.190206-19-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin This patch adds the following registers for access via the KVM_{GET,SET}_ONE_REG interface: * KVM_REG_ARM64_SVE_ZREG(n, i) (n = 0..31) (in 2048-bit slices) * KVM_REG_ARM64_SVE_PREG(n, i) (n = 0..15) (in 256-bit slices) * KVM_REG_ARM64_SVE_FFR(i) (in 256-bit slices) In order to adapt gracefully to future architectural extensions, the registers are logically divided up into slices as noted above: the i parameter denotes the slice index. This allows us to reserve space in the ABI for future expansion of these registers. 
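Concretely, using the macros added in the uapi hunk below (an illustration, not part of the patch):

    __u64 z3  = KVM_REG_ARM64_SVE_ZREG(3, 0); /* Z3, slice 0, 2048-bit */
    __u64 p7  = KVM_REG_ARM64_SVE_PREG(7, 0); /* P7, slice 0, 256-bit */
    __u64 ffr = KVM_REG_ARM64_SVE_FFR(0);     /* FFR = PREG(16, 0) */
    /* In each ID, bits [4:0] hold the slice index i and bits [9:5]
     * the register number n, leaving room for up to 32 slices per
     * register.
     */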
However, as of today the architecture does not permit registers to be larger than a single slice, so no code is needed in the kernel to expose additional slices, for now. The code can be extended later as needed to expose them up to a maximum of 32 slices (as carved out in the architecture itself) if they really exist someday. The registers are only visible for vcpus that have SVE enabled. They are not enumerated by KVM_GET_REG_LIST on vcpus that do not have SVE. Accesses to the FPSIMD registers via KVM_REG_ARM_CORE is not allowed for SVE-enabled vcpus: SVE-aware userspace can use the KVM_REG_ARM64_SVE_ZREG() interface instead to access the same register state. This avoids some complex and pointless emulation in the kernel to convert between the two views of these aliased registers. Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_host.h | 14 +++ arch/arm64/include/uapi/asm/kvm.h | 17 ++++ arch/arm64/kvm/guest.c | 139 +++++++++++++++++++++++++++--- 3 files changed, 158 insertions(+), 12 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 4fabfd250de8..205438a108f6 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -329,6 +329,20 @@ struct kvm_vcpu_arch { #define vcpu_sve_pffr(vcpu) ((void *)((char *)((vcpu)->arch.sve_state) + \ sve_ffr_offset((vcpu)->arch.sve_max_vl))) +#define vcpu_sve_state_size(vcpu) ({ \ + size_t __size_ret; \ + unsigned int __vcpu_vq; \ + \ + if (WARN_ON(!sve_vl_valid((vcpu)->arch.sve_max_vl))) { \ + __size_ret = 0; \ + } else { \ + __vcpu_vq = sve_vq_from_vl((vcpu)->arch.sve_max_vl); \ + __size_ret = SVE_SIG_REGS_SIZE(__vcpu_vq); \ + } \ + \ + __size_ret; \ +}) + /* vcpu_arch flags field values: */ #define KVM_ARM64_DEBUG_DIRTY (1 << 0) #define KVM_ARM64_FP_ENABLED (1 << 1) /* guest FP regs loaded */ diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index 97c3478ee6e7..ced760cc8478 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -226,6 +226,23 @@ struct kvm_vcpu_events { KVM_REG_ARM_FW | ((r) & 0xffff)) #define KVM_REG_ARM_PSCI_VERSION KVM_REG_ARM_FW_REG(0) +/* SVE registers */ +#define KVM_REG_ARM64_SVE (0x15 << KVM_REG_ARM_COPROC_SHIFT) + +/* Z- and P-regs occupy blocks at the following offsets within this range: */ +#define KVM_REG_ARM64_SVE_ZREG_BASE 0 +#define KVM_REG_ARM64_SVE_PREG_BASE 0x400 + +#define KVM_REG_ARM64_SVE_ZREG(n, i) (KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \ + KVM_REG_ARM64_SVE_ZREG_BASE | \ + KVM_REG_SIZE_U2048 | \ + ((n) << 5) | (i)) +#define KVM_REG_ARM64_SVE_PREG(n, i) (KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \ + KVM_REG_ARM64_SVE_PREG_BASE | \ + KVM_REG_SIZE_U256 | \ + ((n) << 5) | (i)) +#define KVM_REG_ARM64_SVE_FFR(i) KVM_REG_ARM64_SVE_PREG(16, i) + /* Device Control API: ARM VGIC */ #define KVM_DEV_ARM_VGIC_GRP_ADDR 0 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1 diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 756d0d614993..736d8cb8dad7 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -19,8 +19,11 @@ * along with this program. If not, see . 
*/ +#include #include #include +#include +#include #include #include #include @@ -30,9 +33,12 @@ #include #include #include +#include #include #include #include +#include +#include #include "trace.h" @@ -200,6 +206,115 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return err; } +#define SVE_REG_SLICE_SHIFT 0 +#define SVE_REG_SLICE_BITS 5 +#define SVE_REG_ID_SHIFT (SVE_REG_SLICE_SHIFT + SVE_REG_SLICE_BITS) +#define SVE_REG_ID_BITS 5 + +#define SVE_REG_SLICE_MASK \ + GENMASK(SVE_REG_SLICE_SHIFT + SVE_REG_SLICE_BITS - 1, \ + SVE_REG_SLICE_SHIFT) +#define SVE_REG_ID_MASK \ + GENMASK(SVE_REG_ID_SHIFT + SVE_REG_ID_BITS - 1, SVE_REG_ID_SHIFT) + +#define SVE_NUM_SLICES (1 << SVE_REG_SLICE_BITS) + +#define KVM_SVE_ZREG_SIZE KVM_REG_SIZE(KVM_REG_ARM64_SVE_ZREG(0, 0)) +#define KVM_SVE_PREG_SIZE KVM_REG_SIZE(KVM_REG_ARM64_SVE_PREG(0, 0)) + +/* Bounds of a single SVE register slice within vcpu->arch.sve_state */ +struct sve_state_reg_region { + unsigned int koffset; /* offset into sve_state in kernel memory */ + unsigned int klen; /* length in kernel memory */ + unsigned int upad; /* extra trailing padding in user memory */ +}; + +/* Get sanitised bounds for user/kernel SVE register copy */ +static int sve_reg_to_region(struct sve_state_reg_region *region, + struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + /* reg ID ranges for Z- registers */ + const u64 zreg_id_min = KVM_REG_ARM64_SVE_ZREG(0, 0); + const u64 zreg_id_max = KVM_REG_ARM64_SVE_ZREG(SVE_NUM_ZREGS - 1, + SVE_NUM_SLICES - 1); + + /* reg ID ranges for P- registers and FFR (which are contiguous) */ + const u64 preg_id_min = KVM_REG_ARM64_SVE_PREG(0, 0); + const u64 preg_id_max = KVM_REG_ARM64_SVE_FFR(SVE_NUM_SLICES - 1); + + unsigned int vq; + unsigned int reg_num; + + unsigned int reqoffset, reqlen; /* User-requested offset and length */ + unsigned int maxlen; /* Maxmimum permitted length */ + + size_t sve_state_size; + + /* Only the first slice ever exists, for now: */ + if ((reg->id & SVE_REG_SLICE_MASK) != 0) + return -ENOENT; + + vq = sve_vq_from_vl(vcpu->arch.sve_max_vl); + + reg_num = (reg->id & SVE_REG_ID_MASK) >> SVE_REG_ID_SHIFT; + + if (reg->id >= zreg_id_min && reg->id <= zreg_id_max) { + reqoffset = SVE_SIG_ZREG_OFFSET(vq, reg_num) - + SVE_SIG_REGS_OFFSET; + reqlen = KVM_SVE_ZREG_SIZE; + maxlen = SVE_SIG_ZREG_SIZE(vq); + } else if (reg->id >= preg_id_min && reg->id <= preg_id_max) { + reqoffset = SVE_SIG_PREG_OFFSET(vq, reg_num) - + SVE_SIG_REGS_OFFSET; + reqlen = KVM_SVE_PREG_SIZE; + maxlen = SVE_SIG_PREG_SIZE(vq); + } else { + return -ENOENT; + } + + sve_state_size = vcpu_sve_state_size(vcpu); + if (!sve_state_size) + return -EINVAL; + + region->koffset = array_index_nospec(reqoffset, sve_state_size); + region->klen = min(maxlen, reqlen); + region->upad = reqlen - region->klen; + + return 0; +} + +static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + struct sve_state_reg_region region; + char __user *uptr = (char __user *)reg->addr; + + if (!vcpu_has_sve(vcpu) || sve_reg_to_region(®ion, vcpu, reg)) + return -ENOENT; + + if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset, + region.klen) || + clear_user(uptr + region.klen, region.upad)) + return -EFAULT; + + return 0; +} + +static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + struct sve_state_reg_region region; + const char __user *uptr = (const char __user *)reg->addr; + + if (!vcpu_has_sve(vcpu) || sve_reg_to_region(®ion, vcpu, reg)) + return -ENOENT; + + if 
(copy_from_user(vcpu->arch.sve_state + region.koffset, uptr, + region.klen)) + return -EFAULT; + + return 0; +} + int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) { return -EINVAL; @@ -346,12 +461,12 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if ((reg->id & ~KVM_REG_SIZE_MASK) >> 32 != KVM_REG_ARM64 >> 32) return -EINVAL; - /* Register group 16 means we want a core register. */ - if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE) - return get_core_reg(vcpu, reg); - - if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW) - return kvm_arm_get_fw_reg(vcpu, reg); + switch (reg->id & KVM_REG_ARM_COPROC_MASK) { + case KVM_REG_ARM_CORE: return get_core_reg(vcpu, reg); + case KVM_REG_ARM_FW: return kvm_arm_get_fw_reg(vcpu, reg); + case KVM_REG_ARM64_SVE: return get_sve_reg(vcpu, reg); + default: break; /* fall through */ + } if (is_timer_reg(reg->id)) return get_timer_reg(vcpu, reg); @@ -365,12 +480,12 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if ((reg->id & ~KVM_REG_SIZE_MASK) >> 32 != KVM_REG_ARM64 >> 32) return -EINVAL; - /* Register group 16 means we set a core register. */ - if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE) - return set_core_reg(vcpu, reg); - - if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW) - return kvm_arm_set_fw_reg(vcpu, reg); + switch (reg->id & KVM_REG_ARM_COPROC_MASK) { + case KVM_REG_ARM_CORE: return set_core_reg(vcpu, reg); + case KVM_REG_ARM_FW: return kvm_arm_set_fw_reg(vcpu, reg); + case KVM_REG_ARM64_SVE: return set_sve_reg(vcpu, reg); + default: break; /* fall through */ + } if (is_timer_reg(reg->id)) return set_timer_reg(vcpu, reg); From patchwork Fri May 3 12:43:50 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928575 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 36FF61390 for ; Fri, 3 May 2019 12:45:53 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2610C204FD for ; Fri, 3 May 2019 12:45:53 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 19FF328595; Fri, 3 May 2019 12:45:53 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9D3EC204FD for ; Fri, 3 May 2019 12:45:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727996AbfECMpu (ORCPT ); Fri, 3 May 2019 08:45:50 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60388 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727737AbfECMpu (ORCPT ); Fri, 3 May 2019 08:45:50 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9BD6F1682; Fri, 3 May 2019 05:45:49 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6478D3F220; Fri, 3 May 2019 05:45:46 -0700 
(PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 19/56] KVM: arm64: Enumerate SVE register indices for KVM_GET_REG_LIST Date: Fri, 3 May 2019 13:43:50 +0100 Message-Id: <20190503124427.190206-20-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin This patch includes the SVE register IDs in the list returned by KVM_GET_REG_LIST, as appropriate. On a non-SVE-enabled vcpu, no new IDs are added. On an SVE-enabled vcpu, IDs for the FPSIMD V-registers are removed from the list, since userspace is required to access the Z- registers instead in order to access the V-register content. For the variably-sized SVE registers, the appropriate set of slice IDs are enumerated, depending on the maximum vector length for the vcpu. As it currently stands, the SVE architecture never requires more than one slice to exist per register, so this patch adds no explicit support for enumerating multiple slices. The code can be extended straightforwardly to support this in the future, if needed. Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei Signed-off-by: Marc Zyngier --- arch/arm64/kvm/guest.c | 63 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 63 insertions(+) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 736d8cb8dad7..2aa80a59e2a2 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -222,6 +222,13 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) #define KVM_SVE_ZREG_SIZE KVM_REG_SIZE(KVM_REG_ARM64_SVE_ZREG(0, 0)) #define KVM_SVE_PREG_SIZE KVM_REG_SIZE(KVM_REG_ARM64_SVE_PREG(0, 0)) +/* + * number of register slices required to cover each whole SVE register on vcpu + * NOTE: If you are tempted to modify this, you must also rework + * sve_reg_to_region() to match: + */ +#define vcpu_sve_slices(vcpu) 1 + /* Bounds of a single SVE register slice within vcpu->arch.sve_state */ struct sve_state_reg_region { unsigned int koffset; /* offset into sve_state in kernel memory */ @@ -411,6 +418,56 @@ static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id)) ?
-EFAULT : 0; } +static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu) +{ + /* Only the first slice ever exists, for now */ + const unsigned int slices = vcpu_sve_slices(vcpu); + + if (!vcpu_has_sve(vcpu)) + return 0; + + return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */); +} + +static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu, + u64 __user *uindices) +{ + /* Only the first slice ever exists, for now */ + const unsigned int slices = vcpu_sve_slices(vcpu); + u64 reg; + unsigned int i, n; + int num_regs = 0; + + if (!vcpu_has_sve(vcpu)) + return 0; + + for (i = 0; i < slices; i++) { + for (n = 0; n < SVE_NUM_ZREGS; n++) { + reg = KVM_REG_ARM64_SVE_ZREG(n, i); + if (put_user(reg, uindices++)) + return -EFAULT; + + num_regs++; + } + + for (n = 0; n < SVE_NUM_PREGS; n++) { + reg = KVM_REG_ARM64_SVE_PREG(n, i); + if (put_user(reg, uindices++)) + return -EFAULT; + + num_regs++; + } + + reg = KVM_REG_ARM64_SVE_FFR(i); + if (put_user(reg, uindices++)) + return -EFAULT; + + num_regs++; + } + + return num_regs; +} + /** * kvm_arm_num_regs - how many registers do we present via KVM_GET_ONE_REG * @@ -421,6 +478,7 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu) unsigned long res = 0; res += num_core_regs(vcpu); + res += num_sve_regs(vcpu); res += kvm_arm_num_sys_reg_descs(vcpu); res += kvm_arm_get_fw_num_regs(vcpu); res += NUM_TIMER_REGS; @@ -442,6 +500,11 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) return ret; uindices += ret; + ret = copy_sve_reg_indices(vcpu, uindices); + if (ret) + return ret; + uindices += ret; + ret = kvm_arm_copy_fw_reg_indices(vcpu, uindices); if (ret) return ret; From patchwork Fri May 3 12:43:51 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928577 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B34E81398 for ; Fri, 3 May 2019 12:45:55 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A1695204FD for ; Fri, 3 May 2019 12:45:55 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 958AE28595; Fri, 3 May 2019 12:45:55 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 08437204FD for ; Fri, 3 May 2019 12:45:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728004AbfECMpx (ORCPT ); Fri, 3 May 2019 08:45:53 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60408 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727997AbfECMpx (ORCPT ); Fri, 3 May 2019 08:45:53 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 229B8374; Fri, 3 May 2019 05:45:53 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DF6DD3F220; Fri, 3 May 2019 05:45:49 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , 
=?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 20/56] arm64/sve: In-kernel vector length availability query interface Date: Fri, 3 May 2019 13:43:51 +0100 Message-Id: <20190503124427.190206-21-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin KVM will need to interrogate the set of SVE vector lengths available on the system. This patch exposes the relevant bits to the kernel, along with a sve_vq_available() helper to check whether a particular vector length is supported. __vq_to_bit() and __bit_to_vq() are not intended for use outside these functions: now that these are exposed outside fpsimd.c, they are prefixed with __ in order to provide an extra hint that they are not intended for general-purpose use. Signed-off-by: Dave Martin Reviewed-by: Alex Bennée Tested-by: zhang.lei Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/fpsimd.h | 29 +++++++++++++++++++++++++++ arch/arm64/kernel/fpsimd.c | 35 ++++++++------------------------- 2 files changed, 37 insertions(+), 27 deletions(-) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index df7a14305222..ad6d2e41eb37 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -24,10 +24,13 @@ #ifndef __ASSEMBLY__ +#include <linux/bitmap.h> #include <linux/build_bug.h> +#include <linux/bug.h> #include <linux/cache.h> #include <linux/init.h> #include <linux/stddef.h> +#include <linux/types.h> #if defined(__KERNEL__) && defined(CONFIG_COMPAT) /* Masks for extracting the FPSR and FPCR from the FPSCR */ @@ -89,6 +92,32 @@ extern u64 read_zcr_features(void); extern int __ro_after_init sve_max_vl; extern int __ro_after_init sve_max_virtualisable_vl; +/* Set of available vector lengths, as vq_to_bit(vq): */ +extern __ro_after_init DECLARE_BITMAP(sve_vq_map, SVE_VQ_MAX); + +/* + * Helpers to translate bit indices in sve_vq_map to VQ values (and + * vice versa). This allows find_next_bit() to be used to find the + * _maximum_ VQ not exceeding a certain value.
+ */ +static inline unsigned int __vq_to_bit(unsigned int vq) +{ + return SVE_VQ_MAX - vq; +} + +static inline unsigned int __bit_to_vq(unsigned int bit) +{ + if (WARN_ON(bit >= SVE_VQ_MAX)) + bit = SVE_VQ_MAX - 1; + + return SVE_VQ_MAX - bit; +} + +/* Ensure vq >= SVE_VQ_MIN && vq <= SVE_VQ_MAX before calling this function */ +static inline bool sve_vq_available(unsigned int vq) +{ + return test_bit(__vq_to_bit(vq), sve_vq_map); +} #ifdef CONFIG_ARM64_SVE diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 8a93afa78600..577296bba730 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -136,7 +136,7 @@ static int sve_default_vl = -1; int __ro_after_init sve_max_vl = SVE_VL_MIN; int __ro_after_init sve_max_virtualisable_vl = SVE_VL_MIN; /* Set of available vector lengths, as vq_to_bit(vq): */ -static __ro_after_init DECLARE_BITMAP(sve_vq_map, SVE_VQ_MAX); +__ro_after_init DECLARE_BITMAP(sve_vq_map, SVE_VQ_MAX); /* Set of vector lengths present on at least one cpu: */ static __ro_after_init DECLARE_BITMAP(sve_vq_partial_map, SVE_VQ_MAX); static void __percpu *efi_sve_state; @@ -269,25 +269,6 @@ void fpsimd_save(void) } } -/* - * Helpers to translate bit indices in sve_vq_map to VQ values (and - * vice versa). This allows find_next_bit() to be used to find the - * _maximum_ VQ not exceeding a certain value. - */ - -static unsigned int vq_to_bit(unsigned int vq) -{ - return SVE_VQ_MAX - vq; -} - -static unsigned int bit_to_vq(unsigned int bit) -{ - if (WARN_ON(bit >= SVE_VQ_MAX)) - bit = SVE_VQ_MAX - 1; - - return SVE_VQ_MAX - bit; -} - /* * All vector length selection from userspace comes through here. * We're on a slow path, so some sanity-checks are included. @@ -309,8 +290,8 @@ static unsigned int find_supported_vector_length(unsigned int vl) vl = max_vl; bit = find_next_bit(sve_vq_map, SVE_VQ_MAX, - vq_to_bit(sve_vq_from_vl(vl))); - return sve_vl_from_vq(bit_to_vq(bit)); + __vq_to_bit(sve_vq_from_vl(vl))); + return sve_vl_from_vq(__bit_to_vq(bit)); } #ifdef CONFIG_SYSCTL @@ -648,7 +629,7 @@ static void sve_probe_vqs(DECLARE_BITMAP(map, SVE_VQ_MAX)) write_sysreg_s(zcr | (vq - 1), SYS_ZCR_EL1); /* self-syncing */ vl = sve_get_vl(); vq = sve_vq_from_vl(vl); /* skip intervening lengths */ - set_bit(vq_to_bit(vq), map); + set_bit(__vq_to_bit(vq), map); } } @@ -717,7 +698,7 @@ int sve_verify_vq_map(void) * Mismatches above sve_max_virtualisable_vl are fine, since * no guest is allowed to configure ZCR_EL2.LEN to exceed this: */ - if (sve_vl_from_vq(bit_to_vq(b)) <= sve_max_virtualisable_vl) { + if (sve_vl_from_vq(__bit_to_vq(b)) <= sve_max_virtualisable_vl) { pr_warn("SVE: cpu%d: Unsupported vector length(s) present\n", smp_processor_id()); return -EINVAL; @@ -801,8 +782,8 @@ void __init sve_setup(void) * so sve_vq_map must have at least SVE_VQ_MIN set. * If something went wrong, at least try to patch it up: */ - if (WARN_ON(!test_bit(vq_to_bit(SVE_VQ_MIN), sve_vq_map))) - set_bit(vq_to_bit(SVE_VQ_MIN), sve_vq_map); + if (WARN_ON(!test_bit(__vq_to_bit(SVE_VQ_MIN), sve_vq_map))) + set_bit(__vq_to_bit(SVE_VQ_MIN), sve_vq_map); zcr = read_sanitised_ftr_reg(SYS_ZCR_EL1); sve_max_vl = sve_vl_from_vq((zcr & ZCR_ELx_LEN_MASK) + 1); @@ -831,7 +812,7 @@ void __init sve_setup(void) /* No virtualisable VLs? This is architecturally forbidden. 
*/ sve_max_virtualisable_vl = SVE_VQ_MIN; else /* b + 1 < SVE_VQ_MAX */ - sve_max_virtualisable_vl = sve_vl_from_vq(bit_to_vq(b + 1)); + sve_max_virtualisable_vl = sve_vl_from_vq(__bit_to_vq(b + 1)); if (sve_max_virtualisable_vl > sve_max_vl) sve_max_virtualisable_vl = sve_max_vl; From patchwork Fri May 3 12:43:52 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928581 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3306D1390 for ; Fri, 3 May 2019 12:45:59 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1FFB7204FD for ; Fri, 3 May 2019 12:45:59 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 144FE28595; Fri, 3 May 2019 12:45:59 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id ACA70204FD for ; Fri, 3 May 2019 12:45:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728005AbfECMp5 (ORCPT ); Fri, 3 May 2019 08:45:57 -0400 Received: from foss.arm.com ([217.140.101.70]:60432 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727640AbfECMp5 (ORCPT ); Fri, 3 May 2019 08:45:57 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9D4DC80D; Fri, 3 May 2019 05:45:56 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 661F63F220; Fri, 3 May 2019 05:45:53 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 21/56] KVM: arm/arm64: Add hook for arch-specific KVM initialisation Date: Fri, 3 May 2019 13:43:52 +0100 Message-Id: <20190503124427.190206-22-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin This patch adds a kvm_arm_init_arch_resources() hook to perform subarch-specific initialisation when starting up KVM. This will be used in a subsequent patch for global SVE-related setup on arm64. No functional change. 
Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei Signed-off-by: Marc Zyngier --- arch/arm/include/asm/kvm_host.h | 2 ++ arch/arm64/include/asm/kvm_host.h | 2 ++ virt/kvm/arm/arm.c | 4 ++++ 3 files changed, 8 insertions(+) diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h index 770d73257ad9..a49ee01242ff 100644 --- a/arch/arm/include/asm/kvm_host.h +++ b/arch/arm/include/asm/kvm_host.h @@ -53,6 +53,8 @@ DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use); +static inline int kvm_arm_init_arch_resources(void) { return 0; } + u32 *kvm_vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num, u32 mode); int __attribute_const__ kvm_target_cpu(void); int kvm_reset_vcpu(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 205438a108f6..3e8950942591 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -58,6 +58,8 @@ DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use); +static inline int kvm_arm_init_arch_resources(void) { return 0; } + int __attribute_const__ kvm_target_cpu(void); int kvm_reset_vcpu(struct kvm_vcpu *vcpu); int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext); diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c index 99c37384ba7b..c69e1370a5dc 100644 --- a/virt/kvm/arm/arm.c +++ b/virt/kvm/arm/arm.c @@ -1664,6 +1664,10 @@ int kvm_arch_init(void *opaque) if (err) return err; + err = kvm_arm_init_arch_resources(); + if (err) + return err; + if (!in_hyp_mode) { err = init_hyp_mode(); if (err) From patchwork Fri May 3 12:43:53 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928583 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 99D6A1398 for ; Fri, 3 May 2019 12:46:02 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 87478204FD for ; Fri, 3 May 2019 12:46:02 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 7ADB328595; Fri, 3 May 2019 12:46:02 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id EA4C3204FD for ; Fri, 3 May 2019 12:46:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727738AbfECMqA (ORCPT ); Fri, 3 May 2019 08:46:00 -0400 Received: from foss.arm.com ([217.140.101.70]:60452 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728009AbfECMqA (ORCPT ); Fri, 3 May 2019 08:46:00 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 23D5B15AD; Fri, 3 May 2019 05:46:00 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id E0EA13F220; Fri, 3 May 2019 05:45:56 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , 
Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 22/56] KVM: arm/arm64: Add KVM_ARM_VCPU_FINALIZE ioctl Date: Fri, 3 May 2019 13:43:53 +0100 Message-Id: <20190503124427.190206-23-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin Some aspects of vcpu configuration may be too complex to be completed inside KVM_ARM_VCPU_INIT. Thus, there may be a requirement for userspace to do some additional configuration before various other ioctls will work in a consistent way. In particular this will be the case for SVE, where userspace will need to negotiate the set of vector lengths to be made available to the guest before the vcpu becomes fully usable. In order to provide an explicit way for userspace to confirm that it has finished setting up a particular vcpu feature, this patch adds a new ioctl KVM_ARM_VCPU_FINALIZE. When userspace has opted into a feature that requires finalization, typically by means of a feature flag passed to KVM_ARM_VCPU_INIT, a matching call to KVM_ARM_VCPU_FINALIZE is now required before KVM_RUN or KVM_GET_REG_LIST is allowed. Individual features may impose additional restrictions where appropriate. No existing vcpu features are affected by this, so current userspace implementations will continue to work exactly as before, with no need to issue KVM_ARM_VCPU_FINALIZE. As implemented in this patch, KVM_ARM_VCPU_FINALIZE is currently a placeholder: no finalizable features exist yet, so the ioctl is not required and will always yield EINVAL. Subsequent patches will add the finalization logic to make use of this ioctl for SVE. No functional change for existing userspace.
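As an aside, a minimal userspace sketch of the intended calling sequence (not part of this patch; it assumes a vcpu fd obtained from KVM_CREATE_VCPU and uses the KVM_ARM_VCPU_SVE feature bit that later patches in this series define):

#include <sys/ioctl.h>
#include <linux/kvm.h>

static int vcpu_finalize_sve(int vcpu_fd)
{
	int feature = KVM_ARM_VCPU_SVE;

	/*
	 * Must follow a KVM_ARM_VCPU_INIT that set the matching feature
	 * flag; returns 0 on success, -1 with errno set (EPERM, EINVAL,
	 * or ENOEXEC for an uninitialised vcpu) on failure.
	 */
	return ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
}

Until such a call succeeds, KVM_RUN and KVM_GET_REG_LIST on that vcpu fail with -EPERM.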
Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei Signed-off-by: Marc Zyngier --- arch/arm/include/asm/kvm_host.h | 4 ++++ arch/arm64/include/asm/kvm_host.h | 4 ++++ include/uapi/linux/kvm.h | 3 +++ virt/kvm/arm/arm.c | 18 ++++++++++++++++++ 4 files changed, 29 insertions(+) diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h index a49ee01242ff..e80cfc18412b 100644 --- a/arch/arm/include/asm/kvm_host.h +++ b/arch/arm/include/asm/kvm_host.h @@ -19,6 +19,7 @@ #ifndef __ARM_KVM_HOST_H__ #define __ARM_KVM_HOST_H__ +#include #include #include #include @@ -411,4 +412,7 @@ static inline int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type) return 0; } +#define kvm_arm_vcpu_finalize(vcpu, what) (-EINVAL) +#define kvm_arm_vcpu_is_finalized(vcpu) true + #endif /* __ARM_KVM_HOST_H__ */ diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 3e8950942591..98658f7dad68 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -23,6 +23,7 @@ #define __ARM64_KVM_HOST_H__ #include +#include #include #include #include @@ -625,4 +626,7 @@ void kvm_arch_free_vm(struct kvm *kvm); int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type); +#define kvm_arm_vcpu_finalize(vcpu, what) (-EINVAL) +#define kvm_arm_vcpu_is_finalized(vcpu) true + #endif /* __ARM64_KVM_HOST_H__ */ diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index dc77a5a3648d..c3b8e7a31315 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1441,6 +1441,9 @@ struct kvm_enc_region { /* Available with KVM_CAP_HYPERV_CPUID */ #define KVM_GET_SUPPORTED_HV_CPUID _IOWR(KVMIO, 0xc1, struct kvm_cpuid2) +/* Available with KVM_CAP_ARM_SVE */ +#define KVM_ARM_VCPU_FINALIZE _IOW(KVMIO, 0xc2, int) + /* Secure Encrypted Virtualization command */ enum sev_cmd_id { /* Guest initialization commands */ diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c index c69e1370a5dc..9edbf0f676e7 100644 --- a/virt/kvm/arm/arm.c +++ b/virt/kvm/arm/arm.c @@ -545,6 +545,9 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu) if (likely(vcpu->arch.has_run_once)) return 0; + if (!kvm_arm_vcpu_is_finalized(vcpu)) + return -EPERM; + vcpu->arch.has_run_once = true; if (likely(irqchip_in_kernel(kvm))) { @@ -1116,6 +1119,10 @@ long kvm_arch_vcpu_ioctl(struct file *filp, if (unlikely(!kvm_vcpu_initialized(vcpu))) break; + r = -EPERM; + if (!kvm_arm_vcpu_is_finalized(vcpu)) + break; + r = -EFAULT; if (copy_from_user(&reg_list, user_list, sizeof(reg_list))) break; @@ -1169,6 +1176,17 @@ long kvm_arch_vcpu_ioctl(struct file *filp, return kvm_arm_vcpu_set_events(vcpu, &events); } + case KVM_ARM_VCPU_FINALIZE: { + int what; + + if (!kvm_vcpu_initialized(vcpu)) + return -ENOEXEC; + + if (get_user(what, (const int __user *)argp)) + return -EFAULT; + + return kvm_arm_vcpu_finalize(vcpu, what); + } default: r = -EINVAL; } From patchwork Fri May 3 12:43:54 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928585 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B542F1398 for ; Fri, 3 May 2019 12:46:06 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A2ED228334 for ; Fri, 3 May 2019 12:46:06 +0000 (UTC) Received: by
mail.wl.linuxfoundation.org (Postfix, from userid 486) id 971E6285BD; Fri, 3 May 2019 12:46:06 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9F4E528334 for ; Fri, 3 May 2019 12:46:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728015AbfECMqE (ORCPT ); Fri, 3 May 2019 08:46:04 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60478 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728007AbfECMqE (ORCPT ); Fri, 3 May 2019 08:46:04 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A060F1684; Fri, 3 May 2019 05:46:03 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 67AAA3F220; Fri, 3 May 2019 05:46:00 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 23/56] KVM: arm64/sve: Add pseudo-register for the guest's vector lengths Date: Fri, 3 May 2019 13:43:54 +0100 Message-Id: <20190503124427.190206-24-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin This patch adds a new pseudo-register KVM_REG_ARM64_SVE_VLS to allow userspace to set and query the set of vector lengths visible to the guest. In the future, multiple register slices per SVE register may be visible through the ioctl interface. Once the set of slices has been determined we would not be able to allow the vector length set to be changed any more, in order to avoid userspace seeing inconsistent sets of registers. For this reason, this patch adds support for explicit finalization of the SVE configuration via the KVM_ARM_VCPU_FINALIZE ioctl. Finalization is the proper place to allocate the SVE register state storage in vcpu->arch.sve_state, so this patch adds that as appropriate. The data is freed via kvm_arch_vcpu_uninit(), which was previously a no-op on arm64. To simplify the logic for determining what vector lengths can be supported, some code is added to KVM init to work this out, in the kvm_arm_init_arch_resources() hook. The KVM_REG_ARM64_SVE_VLS pseudo-register is not exposed yet. Subsequent patches will allow SVE to be turned on for guest vcpus, making it visible. 
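To make the encoding concrete, here is an illustrative userspace sketch (mine, not part of the patch) that reads the pseudo-register and tests whether a given vector quantum vq is available, mirroring the __u64[8] bitmap layout that the in-kernel vq_word()/vq_mask() helpers below implement:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Returns 1 if vector length vq * 16 bytes is available, 0 if not,
 * -1 on ioctl failure.  vq counts from SVE_VQ_MIN, i.e. 1.
 */
static int sve_vq_supported(int vcpu_fd, unsigned int vq)
{
	__u64 vls[8] = { 0 };
	struct kvm_one_reg reg = {
		.id   = KVM_REG_ARM64_SVE_VLS,
		.addr = (__u64)(unsigned long)vls,
	};

	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
		return -1;

	return (vls[(vq - 1) / 64] >> ((vq - 1) % 64)) & 1;
}

Writing a modified bitmap back with KVM_SET_ONE_REG before finalization narrows the set the guest will see; after finalization the register becomes immutable.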
Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_host.h | 15 ++-- arch/arm64/include/uapi/asm/kvm.h | 5 ++ arch/arm64/kvm/guest.c | 114 +++++++++++++++++++++++++++++- arch/arm64/kvm/reset.c | 89 +++++++++++++++++++++++ 4 files changed, 215 insertions(+), 8 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 98658f7dad68..5475cc4df35c 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -23,7 +23,6 @@ #define __ARM64_KVM_HOST_H__ #include -#include #include #include #include @@ -50,6 +49,7 @@ #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS +/* Will be incremented when KVM_ARM_VCPU_SVE is fully implemented: */ #define KVM_VCPU_MAX_FEATURES 4 #define KVM_REQ_SLEEP \ @@ -59,10 +59,12 @@ DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use); -static inline int kvm_arm_init_arch_resources(void) { return 0; } +extern unsigned int kvm_sve_max_vl; +int kvm_arm_init_arch_resources(void); int __attribute_const__ kvm_target_cpu(void); int kvm_reset_vcpu(struct kvm_vcpu *vcpu); +void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu); int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext); void __extended_idmap_trampoline(phys_addr_t boot_pgd, phys_addr_t idmap_start); @@ -353,6 +355,7 @@ struct kvm_vcpu_arch { #define KVM_ARM64_HOST_SVE_IN_USE (1 << 3) /* backup for host TIF_SVE */ #define KVM_ARM64_HOST_SVE_ENABLED (1 << 4) /* SVE enabled for EL0 */ #define KVM_ARM64_GUEST_HAS_SVE (1 << 5) /* SVE exposed to guest */ +#define KVM_ARM64_VCPU_SVE_FINALIZED (1 << 6) /* SVE config completed */ #define vcpu_has_sve(vcpu) (system_supports_sve() && \ ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE)) @@ -525,7 +528,6 @@ static inline bool kvm_arch_requires_vhe(void) static inline void kvm_arch_hardware_unsetup(void) {} static inline void kvm_arch_sync_events(struct kvm *kvm) {} -static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {} static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {} static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {} @@ -626,7 +628,10 @@ void kvm_arch_free_vm(struct kvm *kvm); int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type); -#define kvm_arm_vcpu_finalize(vcpu, what) (-EINVAL) -#define kvm_arm_vcpu_is_finalized(vcpu) true +int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int what); +bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu); + +#define kvm_arm_vcpu_sve_finalized(vcpu) \ + ((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED) #endif /* __ARM64_KVM_HOST_H__ */ diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index ced760cc8478..6963b7e8062b 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -102,6 +102,7 @@ struct kvm_regs { #define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */ #define KVM_ARM_VCPU_PSCI_0_2 2 /* CPU uses PSCI v0.2 */ #define KVM_ARM_VCPU_PMU_V3 3 /* Support guest PMUv3 */ +#define KVM_ARM_VCPU_SVE 4 /* enable SVE for this CPU */ struct kvm_vcpu_init { __u32 target; @@ -243,6 +244,10 @@ struct kvm_vcpu_events { ((n) << 5) | (i)) #define KVM_REG_ARM64_SVE_FFR(i) KVM_REG_ARM64_SVE_PREG(16, i) +/* Vector lengths pseudo-register: */ +#define KVM_REG_ARM64_SVE_VLS (KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \ + KVM_REG_SIZE_U512 | 0xffff) + /* Device Control API: ARM VGIC */ #define KVM_DEV_ARM_VGIC_GRP_ADDR 0 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1 diff --git 
a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 2aa80a59e2a2..086ab0508d69 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -206,6 +206,73 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return err; } +#define vq_word(vq) (((vq) - SVE_VQ_MIN) / 64) +#define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64) + +static bool vq_present( + const u64 (*const vqs)[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)], + unsigned int vq) +{ + return (*vqs)[vq_word(vq)] & vq_mask(vq); +} + +static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + unsigned int max_vq, vq; + u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)]; + + if (WARN_ON(!sve_vl_valid(vcpu->arch.sve_max_vl))) + return -EINVAL; + + memset(vqs, 0, sizeof(vqs)); + + max_vq = sve_vq_from_vl(vcpu->arch.sve_max_vl); + for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq) + if (sve_vq_available(vq)) + vqs[vq_word(vq)] |= vq_mask(vq); + + if (copy_to_user((void __user *)reg->addr, vqs, sizeof(vqs))) + return -EFAULT; + + return 0; +} + +static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + unsigned int max_vq, vq; + u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)]; + + if (kvm_arm_vcpu_sve_finalized(vcpu)) + return -EPERM; /* too late! */ + + if (WARN_ON(vcpu->arch.sve_state)) + return -EINVAL; + + if (copy_from_user(vqs, (const void __user *)reg->addr, sizeof(vqs))) + return -EFAULT; + + max_vq = 0; + for (vq = SVE_VQ_MIN; vq <= SVE_VQ_MAX; ++vq) + if (vq_present(&vqs, vq)) + max_vq = vq; + + if (max_vq > sve_vq_from_vl(kvm_sve_max_vl)) + return -EINVAL; + + for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq) + if (vq_present(&vqs, vq) != sve_vq_available(vq)) + return -EINVAL; + + /* Can't run with no vector lengths at all: */ + if (max_vq < SVE_VQ_MIN) + return -EINVAL; + + /* vcpu->arch.sve_state will be alloc'd by kvm_vcpu_finalize_sve() */ + vcpu->arch.sve_max_vl = sve_vl_from_vq(max_vq); + + return 0; +} + #define SVE_REG_SLICE_SHIFT 0 #define SVE_REG_SLICE_BITS 5 #define SVE_REG_ID_SHIFT (SVE_REG_SLICE_SHIFT + SVE_REG_SLICE_BITS) @@ -296,7 +363,19 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) struct sve_state_reg_region region; char __user *uptr = (char __user *)reg->addr; - if (!vcpu_has_sve(vcpu) || sve_reg_to_region(&region, vcpu, reg)) + if (!vcpu_has_sve(vcpu)) + return -ENOENT; + + /* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */ + if (reg->id == KVM_REG_ARM64_SVE_VLS) + return get_sve_vls(vcpu, reg); + + /* Otherwise, reg is an architectural SVE register... */ + + if (!kvm_arm_vcpu_sve_finalized(vcpu)) + return -EPERM; + + if (sve_reg_to_region(&region, vcpu, reg)) return -ENOENT; + if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset, @@ -312,7 +391,19 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) struct sve_state_reg_region region; const char __user *uptr = (const char __user *)reg->addr; - if (!vcpu_has_sve(vcpu) || sve_reg_to_region(&region, vcpu, reg)) + if (!vcpu_has_sve(vcpu)) + return -ENOENT; + + /* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */ + if (reg->id == KVM_REG_ARM64_SVE_VLS) + return set_sve_vls(vcpu, reg); + + /* Otherwise, reg is an architectural SVE register...
*/ + + if (!kvm_arm_vcpu_sve_finalized(vcpu)) + return -EPERM; + + if (sve_reg_to_region(&region, vcpu, reg)) return -ENOENT; if (copy_from_user(vcpu->arch.sve_state + region.koffset, uptr, @@ -426,7 +517,11 @@ static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu) if (!vcpu_has_sve(vcpu)) return 0; - return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */); + /* Policed by KVM_GET_REG_LIST: */ + WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu)); + + return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */) + + 1; /* KVM_REG_ARM64_SVE_VLS */ } static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu, @@ -441,6 +536,19 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu, if (!vcpu_has_sve(vcpu)) return 0; + /* Policed by KVM_GET_REG_LIST: */ + WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu)); + + /* + * Enumerate this first, so that userspace can save/restore in + * the order reported by KVM_GET_REG_LIST: + */ + reg = KVM_REG_ARM64_SVE_VLS; + if (put_user(reg, uindices++)) + return -EFAULT; + + ++num_regs; + for (i = 0; i < slices; i++) { for (n = 0; n < SVE_NUM_ZREGS; n++) { reg = KVM_REG_ARM64_SVE_ZREG(n, i); diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index f16a5f8ff2b4..e7f9c06fdbbb 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -23,11 +23,14 @@ #include #include #include +#include +#include #include #include #include +#include #include #include #include @@ -99,6 +102,92 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext) return r; } +unsigned int kvm_sve_max_vl; + +int kvm_arm_init_arch_resources(void) +{ + if (system_supports_sve()) { + kvm_sve_max_vl = sve_max_virtualisable_vl; + + /* + * The get_sve_reg()/set_sve_reg() ioctl interface will need + * to be extended with multiple register slice support in + * order to support vector lengths greater than + * SVE_VL_ARCH_MAX: + */ + if (WARN_ON(kvm_sve_max_vl > SVE_VL_ARCH_MAX)) + kvm_sve_max_vl = SVE_VL_ARCH_MAX; + + /* + * Don't even try to make use of vector lengths that + * aren't available on all CPUs, for now: + */ + if (kvm_sve_max_vl < sve_max_vl) + pr_warn("KVM: SVE vector length for guests limited to %u bytes\n", + kvm_sve_max_vl); + } + + return 0; +} + +/* + * Finalize vcpu's maximum SVE vector length, allocating + * vcpu->arch.sve_state as necessary. + */ +static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu) +{ + void *buf; + unsigned int vl; + + vl = vcpu->arch.sve_max_vl; + + /* + * Responsibility for these properties is shared between + * kvm_arm_init_arch_resources(), kvm_vcpu_enable_sve() and + * set_sve_vls().
Double-check here just to be sure: + */ + if (WARN_ON(!sve_vl_valid(vl) || vl > sve_max_virtualisable_vl || + vl > SVE_VL_ARCH_MAX)) + return -EIO; + + buf = kzalloc(SVE_SIG_REGS_SIZE(sve_vq_from_vl(vl)), GFP_KERNEL); + if (!buf) + return -ENOMEM; + + vcpu->arch.sve_state = buf; + vcpu->arch.flags |= KVM_ARM64_VCPU_SVE_FINALIZED; + return 0; +} + +int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int what) +{ + switch (what) { + case KVM_ARM_VCPU_SVE: + if (!vcpu_has_sve(vcpu)) + return -EINVAL; + + if (kvm_arm_vcpu_sve_finalized(vcpu)) + return -EPERM; + + return kvm_vcpu_finalize_sve(vcpu); + } + + return -EINVAL; +} + +bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu) +{ + if (vcpu_has_sve(vcpu) && !kvm_arm_vcpu_sve_finalized(vcpu)) + return false; + + return true; +} + +void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) +{ + kfree(vcpu->arch.sve_state); +} + /** * kvm_reset_vcpu - sets core registers and sys_regs to reset value * @vcpu: The VCPU pointer From patchwork Fri May 3 12:43:55 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928587 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 92CA31390 for ; Fri, 3 May 2019 12:46:09 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 7D21D204FD for ; Fri, 3 May 2019 12:46:09 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 6C8E128595; Fri, 3 May 2019 12:46:09 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E4916204FD for ; Fri, 3 May 2019 12:46:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728018AbfECMqH (ORCPT ); Fri, 3 May 2019 08:46:07 -0400 Received: from foss.arm.com ([217.140.101.70]:60502 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727728AbfECMqH (ORCPT ); Fri, 3 May 2019 08:46:07 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2740D1682; Fri, 3 May 2019 05:46:07 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id E3DDB3F220; Fri, 3 May 2019 05:46:03 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . 
lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 24/56] KVM: arm64/sve: Allow userspace to enable SVE for vcpus Date: Fri, 3 May 2019 13:43:55 +0100 Message-Id: <20190503124427.190206-25-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin Now that all the pieces are in place, this patch offers a new flag KVM_ARM_VCPU_SVE that userspace can pass to KVM_ARM_VCPU_INIT to turn on SVE for the guest, on a per-vcpu basis. As part of this, support for initialisation and reset of the SVE vector length set and registers is added in the appropriate places, as well as finally setting the KVM_ARM64_GUEST_HAS_SVE vcpu flag, to turn on the SVE support code. Allocation of the SVE register storage in vcpu->arch.sve_state is deferred until the SVE configuration is finalized, by which time the size of the registers is known. Setting the vector lengths supported by the vcpu is considered configuration of the emulated hardware rather than runtime configuration, so no support is offered for changing the vector lengths available to an existing vcpu across reset. Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_host.h | 3 +-- arch/arm64/kvm/reset.c | 43 ++++++++++++++++++++++++++++++- 2 files changed, 43 insertions(+), 3 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 5475cc4df35c..9d57cf8be879 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -49,8 +49,7 @@ #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS -/* Will be incremented when KVM_ARM_VCPU_SVE is fully implemented: */ -#define KVM_VCPU_MAX_FEATURES 4 +#define KVM_VCPU_MAX_FEATURES 5 #define KVM_REQ_SLEEP \ KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP) diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index e7f9c06fdbbb..32c5ac0a3872 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -20,10 +20,12 @@ */ #include +#include #include #include #include #include +#include #include #include @@ -37,6 +39,7 @@ #include #include #include +#include /* Maximum phys_shift supported for any VM on this host */ static u32 kvm_ipa_limit; @@ -130,6 +133,27 @@ int kvm_arm_init_arch_resources(void) return 0; } +static int kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) +{ + if (!system_supports_sve()) + return -EINVAL; + + /* Verify that KVM startup enforced this when SVE was detected: */ + if (WARN_ON(!has_vhe())) + return -EINVAL; + + vcpu->arch.sve_max_vl = kvm_sve_max_vl; + + /* + * Userspace can still customize the vector lengths by writing + * KVM_REG_ARM64_SVE_VLS. Allocation is deferred until + * kvm_arm_vcpu_finalize(), which freezes the configuration. + */ + vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_SVE; + + return 0; +} + /* * Finalize vcpu's maximum SVE vector length, allocating * vcpu->arch.sve_state as necessary. 
@@ -188,13 +212,20 @@ void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) kfree(vcpu->arch.sve_state); } +static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu) +{ + if (vcpu_has_sve(vcpu)) + memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu)); +} + /** * kvm_reset_vcpu - sets core registers and sys_regs to reset value * @vcpu: The VCPU pointer * * This function finds the right table above and sets the registers on * the virtual CPU struct to their architecturally defined reset - * values. + * values, except for registers whose reset is deferred until + * kvm_arm_vcpu_finalize(). * * Note: This function can be called from two paths: The KVM_ARM_VCPU_INIT * ioctl or as part of handling a request issued by another VCPU in the PSCI @@ -217,6 +248,16 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu) if (loaded) kvm_arch_vcpu_put(vcpu); + if (!kvm_arm_vcpu_sve_finalized(vcpu)) { + if (test_bit(KVM_ARM_VCPU_SVE, vcpu->arch.features)) { + ret = kvm_vcpu_enable_sve(vcpu); + if (ret) + goto out; + } + } else { + kvm_vcpu_reset_sve(vcpu); + } + switch (vcpu->arch.target) { default: if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) { From patchwork Fri May 3 12:43:56 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928589 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 79C181398 for ; Fri, 3 May 2019 12:46:12 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 67CA0204FD for ; Fri, 3 May 2019 12:46:12 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 5AE6328595; Fri, 3 May 2019 12:46:12 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 030F1204FD for ; Fri, 3 May 2019 12:46:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727640AbfECMqL (ORCPT ); Fri, 3 May 2019 08:46:11 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60528 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728008AbfECMqL (ORCPT ); Fri, 3 May 2019 08:46:11 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A22D716A3; Fri, 3 May 2019 05:46:10 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6A44B3F220; Fri, 3 May 2019 05:46:07 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . 
lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 25/56] KVM: arm64: Add a capability to advertise SVE support Date: Fri, 3 May 2019 13:43:56 +0100 Message-Id: <20190503124427.190206-26-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin To provide a uniform way to check for KVM SVE support amongst other features, this patch adds a suitable capability KVM_CAP_ARM_SVE, and reports it as present when SVE is available. Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei Signed-off-by: Marc Zyngier --- arch/arm64/kvm/reset.c | 3 +++ include/uapi/linux/kvm.h | 1 + 2 files changed, 4 insertions(+) diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index 32c5ac0a3872..f13378d0a0ad 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -98,6 +98,9 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_ARM_VM_IPA_SIZE: r = kvm_ipa_limit; break; + case KVM_CAP_ARM_SVE: + r = system_supports_sve(); + break; default: r = 0; } diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index c3b8e7a31315..1d564445b515 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -988,6 +988,7 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_ARM_VM_IPA_SIZE 165 #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166 #define KVM_CAP_HYPERV_CPUID 167 +#define KVM_CAP_ARM_SVE 168 #ifdef KVM_CAP_IRQ_ROUTING From patchwork Fri May 3 12:43:57 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928593 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7F4D51390 for ; Fri, 3 May 2019 12:46:16 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6E980204FD for ; Fri, 3 May 2019 12:46:16 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 6280C28595; Fri, 3 May 2019 12:46:16 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0C412204FD for ; Fri, 3 May 2019 12:46:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728024AbfECMqP (ORCPT ); Fri, 3 May 2019 08:46:15 -0400 Received: from foss.arm.com ([217.140.101.70]:60556 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727726AbfECMqO (ORCPT ); Fri, 3 May 2019 08:46:14 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2801D169E; Fri, 3 May 2019 05:46:14 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id E53323F220; Fri, 3 May 2019 05:46:10 -0700 
(PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 26/56] KVM: Document errors for KVM_GET_ONE_REG and KVM_SET_ONE_REG Date: Fri, 3 May 2019 13:43:57 +0100 Message-Id: <20190503124427.190206-27-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin KVM_GET_ONE_REG and KVM_SET_ONE_REG return some error codes that are not documented (but hopefully not surprising either). To give an indication of what these may mean, this patch adds brief documentation. Signed-off-by: Dave Martin Signed-off-by: Marc Zyngier --- Documentation/virtual/kvm/api.txt | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt index 2d4f7ce5e967..cd920dd1195c 100644 --- a/Documentation/virtual/kvm/api.txt +++ b/Documentation/virtual/kvm/api.txt @@ -1871,6 +1871,9 @@ Architectures: all Type: vcpu ioctl Parameters: struct kvm_one_reg (in) Returns: 0 on success, negative value on failure +Errors: +  ENOENT:   no such register +  EINVAL:   other errors, such as bad size encoding for a known register struct kvm_one_reg { __u64 id; @@ -2192,6 +2195,9 @@ Architectures: all Type: vcpu ioctl Parameters: struct kvm_one_reg (in and out) Returns: 0 on success, negative value on failure +Errors: +  ENOENT:   no such register +  EINVAL:   other errors, such as bad size encoding for a known register This ioctl allows to receive the value of a single register implemented in a vcpu. 
The register to read is indicated by the "id" field of the From patchwork Fri May 3 12:43:58 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928595 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6A6CB1390 for ; Fri, 3 May 2019 12:46:20 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 57FEA28334 for ; Fri, 3 May 2019 12:46:20 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 4958E285BD; Fri, 3 May 2019 12:46:20 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 83447204FD for ; Fri, 3 May 2019 12:46:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728029AbfECMqS (ORCPT ); Fri, 3 May 2019 08:46:18 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60574 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727724AbfECMqS (ORCPT ); Fri, 3 May 2019 08:46:18 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A1FF3165C; Fri, 3 May 2019 05:46:17 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6B23B3F220; Fri, 3 May 2019 05:46:14 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 27/56] KVM: arm64/sve: Document KVM API extensions for SVE Date: Fri, 3 May 2019 13:43:58 +0100 Message-Id: <20190503124427.190206-28-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin This patch adds sections to the KVM API documentation describing the extensions for supporting the Scalable Vector Extension (SVE) in guests. 
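By way of example (mine, not taken from the patch), reading bits [2047:0] of Z0 under the scheme documented here looks like this, once the vcpu's SVE configuration has been finalized:

#include <sys/ioctl.h>
#include <linux/kvm.h>
#include <stdint.h>

/* Z registers are KVM_REG_SIZE_U2048, i.e. 256 bytes per slice. */
static int read_sve_z0(int vcpu_fd, uint8_t buf[256])
{
	struct kvm_one_reg reg = {
		.id   = KVM_REG_ARM64_SVE_ZREG(0, 0),	/* Z0, slice 0 */
		.addr = (uint64_t)(unsigned long)buf,
	};

	/* Fails with EPERM before KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE). */
	return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
}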
Signed-off-by: Dave Martin Signed-off-by: Marc Zyngier --- Documentation/virtual/kvm/api.txt | 132 +++++++++++++++++++++++++++++- 1 file changed, 129 insertions(+), 3 deletions(-) diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt index cd920dd1195c..68509dee23e8 100644 --- a/Documentation/virtual/kvm/api.txt +++ b/Documentation/virtual/kvm/api.txt @@ -1873,6 +1873,7 @@ Parameters: struct kvm_one_reg (in) Returns: 0 on success, negative value on failure Errors:  ENOENT:   no such register +  EPERM:    register access forbidden for architecture-dependent reasons  EINVAL:   other errors, such as bad size encoding for a known register struct kvm_one_reg { @@ -2127,13 +2128,20 @@ Specifically: 0x6030 0000 0010 004c SPSR_UND 64 spsr[KVM_SPSR_UND] 0x6030 0000 0010 004e SPSR_IRQ 64 spsr[KVM_SPSR_IRQ] 0x6060 0000 0010 0050 SPSR_FIQ 64 spsr[KVM_SPSR_FIQ] - 0x6040 0000 0010 0054 V0 128 fp_regs.vregs[0] - 0x6040 0000 0010 0058 V1 128 fp_regs.vregs[1] + 0x6040 0000 0010 0054 V0 128 fp_regs.vregs[0] (*) + 0x6040 0000 0010 0058 V1 128 fp_regs.vregs[1] (*) ... - 0x6040 0000 0010 00d0 V31 128 fp_regs.vregs[31] + 0x6040 0000 0010 00d0 V31 128 fp_regs.vregs[31] (*) 0x6020 0000 0010 00d4 FPSR 32 fp_regs.fpsr 0x6020 0000 0010 00d5 FPCR 32 fp_regs.fpcr +(*) These encodings are not accepted for SVE-enabled vcpus. See + KVM_ARM_VCPU_INIT. + + The equivalent register content can be accessed via bits [127:0] of + the corresponding SVE Zn registers instead for vcpus that have SVE + enabled (see below). + arm64 CCSIDR registers are demultiplexed by CSSELR value: 0x6020 0000 0011 00 <csselr:8> @@ -2143,6 +2151,61 @@ arm64 system registers have the following id bit patterns: 0x6030 0000 0013 <op0:2> <op1:3> <crn:4> <crm:4> <op2:3> arm64 firmware pseudo-registers have the following bit pattern: 0x6030 0000 0014 <regno:16> +arm64 SVE registers have the following bit patterns: + 0x6080 0000 0015 00 <n:5> <slice:5> Zn bits[2048*slice + 2047 : 2048*slice] + 0x6050 0000 0015 04 <n:4> <slice:5> Pn bits[256*slice + 255 : 256*slice] + 0x6050 0000 0015 060 <slice:5> FFR bits[256*slice + 255 : 256*slice] + 0x6060 0000 0015 ffff KVM_REG_ARM64_SVE_VLS pseudo-register + +Access to slices beyond the maximum vector length configured for the +vcpu (i.e., where 16 * slice >= max_vq (**)) will fail with ENOENT. + +These registers are only accessible on vcpus for which SVE is enabled. +See KVM_ARM_VCPU_INIT for details. + +In addition, except for KVM_REG_ARM64_SVE_VLS, these registers are not +accessible until the vcpu's SVE configuration has been finalized +using KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE). See KVM_ARM_VCPU_INIT +and KVM_ARM_VCPU_FINALIZE for more information about this procedure. + +KVM_REG_ARM64_SVE_VLS is a pseudo-register that allows the set of vector +lengths supported by the vcpu to be discovered and configured by +userspace. When transferred to or from user memory via KVM_GET_ONE_REG +or KVM_SET_ONE_REG, the value of this register is of type __u64[8], and +encodes the set of vector lengths as follows: + +__u64 vector_lengths[8]; + +if (vq >= SVE_VQ_MIN && vq <= SVE_VQ_MAX && + ((vector_lengths[(vq - 1) / 64] >> ((vq - 1) % 64)) & 1)) + /* Vector length vq * 16 bytes supported */ +else + /* Vector length vq * 16 bytes not supported */ + +(**) The maximum value vq for which the above condition is true is +max_vq. This is the maximum vector length available to the guest on +this vcpu, and determines which register slices are visible through +this ioctl interface. + +(See Documentation/arm64/sve.txt for an explanation of the "vq" +nomenclature.)
+
+KVM_REG_ARM64_SVE_VLS is only accessible after KVM_ARM_VCPU_INIT.
+KVM_ARM_VCPU_INIT initialises it to the best set of vector lengths that
+the host supports.
+
+Userspace may subsequently modify it if desired until the vcpu's SVE
+configuration is finalized using KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE).
+
+Apart from simply removing all vector lengths from the host set that
+exceed some value, support for arbitrarily chosen sets of vector lengths
+is hardware-dependent and may not be available.  Attempting to configure
+an invalid set of vector lengths via KVM_SET_ONE_REG will fail with
+EINVAL.
+
+After the vcpu's SVE configuration is finalized, further attempts to
+write this register will fail with EPERM.
+
 MIPS registers are mapped using the lower 32 bits.  The upper 16 of that is
 the register group type:

@@ -2197,6 +2260,7 @@ Parameters: struct kvm_one_reg (in and out)
 Returns: 0 on success, negative value on failure
 Errors:
   ENOENT:   no such register
+  EPERM:    register access forbidden for architecture-dependent reasons
   EINVAL:   other errors, such as bad size encoding for a known register

 This ioctl allows to receive the value of a single register implemented

@@ -2690,6 +2754,33 @@ Possible features:
	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
	  Depends on KVM_CAP_ARM_PMU_V3.

+	- KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only).
+	  Depends on KVM_CAP_ARM_SVE.
+	  Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE):
+
+	   * After KVM_ARM_VCPU_INIT:
+
+	      - KVM_REG_ARM64_SVE_VLS may be read using KVM_GET_ONE_REG: the
+	        initial value of this pseudo-register indicates the best set of
+	        vector lengths possible for a vcpu on this host.
+
+	   * Before KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE):
+
+	      - KVM_RUN and KVM_GET_REG_LIST are not available;
+
+	      - KVM_GET_ONE_REG and KVM_SET_ONE_REG cannot be used to access
+	        the scalable architectural SVE registers
+	        KVM_REG_ARM64_SVE_ZREG(), KVM_REG_ARM64_SVE_PREG() or
+	        KVM_REG_ARM64_SVE_FFR;
+
+	      - KVM_REG_ARM64_SVE_VLS may optionally be written using
+	        KVM_SET_ONE_REG, to modify the set of vector lengths available
+	        for the vcpu.
+
+	   * After KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE):
+
+	      - the KVM_REG_ARM64_SVE_VLS pseudo-register is immutable, and can
+	        no longer be written using KVM_SET_ONE_REG.

 4.83 KVM_ARM_PREFERRED_TARGET

@@ -3904,6 +3995,41 @@ number of valid entries in the 'entries' array, which is then filled.
 'index' and 'flags' fields in 'struct kvm_cpuid_entry2' are currently reserved,
 userspace should not expect to get any particular value there.

+4.119 KVM_ARM_VCPU_FINALIZE
+
+Capability: KVM_CAP_ARM_SVE
+Architectures: arm, arm64
+Type: vcpu ioctl
+Parameters: int feature (in)
+Returns: 0 on success, -1 on error
+Errors:
+  EPERM:     feature not enabled, needs configuration, or already finalized
+  EINVAL:    unknown feature
+
+Recognised values for feature:
+  arm64  KVM_ARM_VCPU_SVE
+
+Finalizes the configuration of the specified vcpu feature.
+
+The vcpu must already have been initialised, enabling the affected feature, by
+means of a successful KVM_ARM_VCPU_INIT call with the appropriate flag set in
+features[].
+
+For affected vcpu features, this is a mandatory step that must be performed
+before the vcpu is fully usable.
+
+Between KVM_ARM_VCPU_INIT and KVM_ARM_VCPU_FINALIZE, the feature may be
+configured by use of ioctls such as KVM_SET_ONE_REG.  The exact configuration
+that should be performed and how to do it are feature-dependent.
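[Editorial note: a minimal sketch of the ordering the two sections above
describe, with error handling condensed.  vcpu_fd is assumed to be a vcpu
file descriptor freshly obtained with KVM_CREATE_VCPU, and the target value
would normally come from KVM_ARM_PREFERRED_TARGET.]

	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int enable_and_finalize_sve(int vcpu_fd, __u32 target)
	{
		struct kvm_vcpu_init init;
		__u64 vls[8];			/* KVM_REG_ARM64_SVE_VLS value */
		struct kvm_one_reg reg = {
			.id   = KVM_REG_ARM64_SVE_VLS,
			.addr = (__u64)(unsigned long)vls,
		};
		int feature = KVM_ARM_VCPU_SVE;

		memset(&init, 0, sizeof(init));
		init.target = target;
		init.features[0] = 1 << KVM_ARM_VCPU_SVE;
		if (ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init))
			return -1;

		/* Read the best supported set of vector lengths... */
		if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
			return -1;

		/* ...optionally prune vls[] here, then write the set back: */
		if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg))
			return -1;

		/* KVM_RUN and the SVE Z/P/FFR registers are unusable until this: */
		return ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
	}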
+
+Other calls that depend on a particular feature being finalized, such as
+KVM_RUN, KVM_GET_REG_LIST, KVM_GET_ONE_REG and KVM_SET_ONE_REG, will fail with
+-EPERM unless the feature has already been finalized by means of a
+KVM_ARM_VCPU_FINALIZE call.
+
+See KVM_ARM_VCPU_INIT for details of vcpu features that require finalization
+using this ioctl.
+
 5. The kvm_run structure
 ------------------------

From patchwork Fri May 3 12:43:59 2019
From: Marc Zyngier
Subject: [PATCH 28/56] arm64: KVM: Fix system register enumeration
Date: Fri, 3 May 2019 13:43:59 +0100
Message-Id: <20190503124427.190206-29-marc.zyngier@arm.com>
In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com>

The introduction of the SVE registers to userspace started with a
refactoring of the way we expose any register via the ONE_REG
interface.

Unfortunately, this change doesn't exactly behave as expected if the
number of registers is non-zero, and considers everything to be an
error.  The visible result is that QEMU barfs very early when creating
vcpus.
Make sure we only exit early in case there is an actual error, rather
than a positive number of registers...

Fixes: be25bbb392fa ("KVM: arm64: Factor out core register ID enumeration")
Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/guest.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 086ab0508d69..4f7b26bbf671 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -604,22 +604,22 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 	int ret;

 	ret = copy_core_reg_indices(vcpu, uindices);
-	if (ret)
+	if (ret < 0)
 		return ret;
 	uindices += ret;

 	ret = copy_sve_reg_indices(vcpu, uindices);
-	if (ret)
+	if (ret < 0)
 		return ret;
 	uindices += ret;

 	ret = kvm_arm_copy_fw_reg_indices(vcpu, uindices);
-	if (ret)
+	if (ret < 0)
 		return ret;
 	uindices += kvm_arm_get_fw_num_regs(vcpu);

 	ret = copy_timer_indices(vcpu, uindices);
-	if (ret)
+	if (ret < 0)
 		return ret;
 	uindices += NUM_TIMER_REGS;
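[Editorial note: the convention this fix restores - each enumeration helper
returns the number of register IDs it emitted, with only negative values
meaning failure - in a standalone toy form; the ID constant is arbitrary.]

	#include <stdio.h>

	/* Toy model: writes n IDs and returns how many it wrote (not an error). */
	static int copy_toy_indices(unsigned long long *out, int n)
	{
		for (int i = 0; i < n; i++)
			out[i] = 0x6030000000100000ULL + i;	/* arbitrary IDs */
		return n;
	}

	int main(void)
	{
		unsigned long long ids[4];
		int ret = copy_toy_indices(ids, 3);

		if (ret < 0)	/* a bare "if (ret)" would wrongly treat 3 as failure */
			return 1;
		printf("copied %d register IDs\n", ret);
		return 0;
	}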
lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 29/56] arm64/sve: Clarify vq map semantics Date: Fri, 3 May 2019 13:44:00 +0100 Message-Id: <20190503124427.190206-30-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin Currently the meanings of sve_vq_map and the ancillary helpers __bit_to_vq() and __vq_to_bit() are not clearly explained. This patch makes the explanatory comment clearer, and removes the duplicate comment from fpsimd.h. The WARN_ON() currently present in __bit_to_vq() confuses the intended use of this helper. Since these are low-level helpers not intended for general-purpose use anyway, it is better not to make guesses about how these functions will be used: rather, this patch removes the WARN_ON() and relies on callers to use the helpers sensibly. Suggested-by: Andrew Jones Signed-off-by: Dave Martin Reviewed-by: Andrew Jones Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/fpsimd.h | 4 ---- arch/arm64/kernel/fpsimd.c | 7 ++++++- 2 files changed, 6 insertions(+), 5 deletions(-) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index ad6d2e41eb37..df62bbd33a9a 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -92,7 +92,6 @@ extern u64 read_zcr_features(void); extern int __ro_after_init sve_max_vl; extern int __ro_after_init sve_max_virtualisable_vl; -/* Set of available vector lengths, as vq_to_bit(vq): */ extern __ro_after_init DECLARE_BITMAP(sve_vq_map, SVE_VQ_MAX); /* @@ -107,9 +106,6 @@ static inline unsigned int __vq_to_bit(unsigned int vq) static inline unsigned int __bit_to_vq(unsigned int bit) { - if (WARN_ON(bit >= SVE_VQ_MAX)) - bit = SVE_VQ_MAX - 1; - return SVE_VQ_MAX - bit; } diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 577296bba730..56afa40263d9 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -135,10 +135,15 @@ static int sve_default_vl = -1; /* Maximum supported vector length across all CPUs (initially poisoned) */ int __ro_after_init sve_max_vl = SVE_VL_MIN; int __ro_after_init sve_max_virtualisable_vl = SVE_VL_MIN; -/* Set of available vector lengths, as vq_to_bit(vq): */ + +/* + * Set of available vector lengths, + * where length vq encoded as bit __vq_to_bit(vq): + */ __ro_after_init DECLARE_BITMAP(sve_vq_map, SVE_VQ_MAX); /* Set of vector lengths present on at least one cpu: */ static __ro_after_init DECLARE_BITMAP(sve_vq_partial_map, SVE_VQ_MAX); + static void __percpu *efi_sve_state; #else /* ! 
From patchwork Fri May 3 12:44:01 2019
From: Marc Zyngier
Subject: [PATCH 30/56] KVM: arm/arm64: Demote kvm_arm_init_arch_resources() to just set up SVE
Date: Fri, 3 May 2019 13:44:01 +0100
Message-Id: <20190503124427.190206-31-marc.zyngier@arm.com>
In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com>

From: Dave Martin

The introduction of kvm_arm_init_arch_resources() looks like premature
factoring, since nothing else uses this hook yet and it is not clear
what will use it in the future.

For now, let's not pretend that this is a general thing: This patch
simply renames the function to kvm_arm_init_sve(), retaining the arm
stub version under the new name.
Suggested-by: Andrew Jones
Signed-off-by: Dave Martin
Reviewed-by: Andrew Jones
Signed-off-by: Marc Zyngier
---
 arch/arm/include/asm/kvm_host.h   | 2 +-
 arch/arm64/include/asm/kvm_host.h | 2 +-
 arch/arm64/kvm/reset.c            | 2 +-
 virt/kvm/arm/arm.c                | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index e80cfc18412b..d95627393353 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -54,7 +54,7 @@ DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);

-static inline int kvm_arm_init_arch_resources(void) { return 0; }
+static inline int kvm_arm_init_sve(void) { return 0; }

 u32 *kvm_vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num, u32 mode);
 int __attribute_const__ kvm_target_cpu(void);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9d57cf8be879..6adf08ba9277 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -59,7 +59,7 @@ DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
 extern unsigned int kvm_sve_max_vl;
-int kvm_arm_init_arch_resources(void);
+int kvm_arm_init_sve(void);

 int __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index f13378d0a0ad..8847f389f56d 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -110,7 +110,7 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 unsigned int kvm_sve_max_vl;

-int kvm_arm_init_arch_resources(void)
+int kvm_arm_init_sve(void)
 {
	if (system_supports_sve()) {
		kvm_sve_max_vl = sve_max_virtualisable_vl;
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 9edbf0f676e7..7039c99cc217 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1682,7 +1682,7 @@ int kvm_arch_init(void *opaque)
	if (err)
		return err;

-	err = kvm_arm_init_arch_resources();
+	err = kvm_arm_init_sve();
	if (err)
		return err;
From patchwork Fri May 3 12:44:02 2019
From: Marc Zyngier
Subject: [PATCH 31/56] KVM: arm: Make vcpu finalization stubs into inline functions
Date: Fri, 3 May 2019 13:44:02 +0100
Message-Id: <20190503124427.190206-32-marc.zyngier@arm.com>
In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com>

From: Dave Martin

The vcpu finalization stubs kvm_arm_vcpu_finalize() and
kvm_arm_vcpu_is_finalized() are currently #defines for ARM, which
limits the type-checking that the compiler can do at build time.

The only reason for them to be #defines was to avoid reliance on the
definition of struct kvm_vcpu, which is not available here due to
circular #include problems.  However, because these are stubs
containing no code, they don't need the definition of struct kvm_vcpu
after all; only a declaration is needed (which is available already).

So in the interests of cleanliness, this patch converts them to inline
functions.

No functional change.
Suggested-by: Andrew Jones
Signed-off-by: Dave Martin
Reviewed-by: Andrew Jones
Signed-off-by: Marc Zyngier
---
 arch/arm/include/asm/kvm_host.h | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index d95627393353..7feddacbc207 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -412,7 +412,14 @@ static inline int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
	return 0;
 }

-#define kvm_arm_vcpu_finalize(vcpu, what) (-EINVAL)
-#define kvm_arm_vcpu_is_finalized(vcpu) true
+static inline int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int what)
+{
+	return -EINVAL;
+}
+
+static inline bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu)
+{
+	return true;
+}

 #endif /* __ARM_KVM_HOST_H__ */
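[Editorial note: the class of mistake the inline stubs above now catch,
shown standalone.  With the old #defines the commented-out call below would
have compiled without complaint, since vcpu was never evaluated as a typed
expression.]

	struct kvm_vcpu;	/* a forward declaration is all the stub needs */

	static inline int finalize_stub(struct kvm_vcpu *vcpu, int what)
	{
		(void)vcpu;
		(void)what;
		return -22;	/* -EINVAL */
	}

	int demo(struct kvm_vcpu *vcpu)
	{
		/* finalize_stub("not a vcpu", 0) now fails to compile: */
		return finalize_stub(vcpu, 0);
	}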
lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 32/56] KVM: arm64/sve: sys_regs: Demote redundant vcpu_has_sve() checks to WARNs Date: Fri, 3 May 2019 13:44:03 +0100 Message-Id: <20190503124427.190206-33-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin Because of the logic in kvm_arm_sys_reg_{get,set}_reg() and sve_id_visibility(), we should never call {get,set}_id_aa64zfr0_el1() for a vcpu where !vcpu_has_sve(vcpu). To avoid the code giving the impression that it is valid for these functions to be called in this situation, and to help the compiler make the right optimisation decisions, this patch adds WARN_ON() for these cases. Given the way the logic is spread out, this seems preferable to dropping the checks altogether. Suggested-by: Andrew Jones Signed-off-by: Dave Martin Reviewed-by: Andrew Jones Signed-off-by: Marc Zyngier --- arch/arm64/kvm/sys_regs.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 09e9b0625911..7046c7686321 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1144,7 +1144,7 @@ static int get_id_aa64zfr0_el1(struct kvm_vcpu *vcpu, { u64 val; - if (!vcpu_has_sve(vcpu)) + if (WARN_ON(!vcpu_has_sve(vcpu))) return -ENOENT; val = guest_id_aa64zfr0_el1(vcpu); @@ -1159,7 +1159,7 @@ static int set_id_aa64zfr0_el1(struct kvm_vcpu *vcpu, int err; u64 val; - if (!vcpu_has_sve(vcpu)) + if (WARN_ON(!vcpu_has_sve(vcpu))) return -ENOENT; err = reg_from_user(&val, uaddr, id); From patchwork Fri May 3 12:44:04 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928611 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2B9B61390 for ; Fri, 3 May 2019 12:46:41 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1A62A204FD for ; Fri, 3 May 2019 12:46:41 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 0EB0028595; Fri, 3 May 2019 12:46:41 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8EC81204FD for ; Fri, 3 May 2019 12:46:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728047AbfECMqj (ORCPT ); Fri, 3 May 2019 08:46:39 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60698 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727791AbfECMqj (ORCPT ); Fri, 3 May 2019 08:46:39 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A4FA6374; Fri, 3 May 2019 05:46:38 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com 
From patchwork Fri May 3 12:44:04 2019
From: Marc Zyngier
Subject: [PATCH 33/56] KVM: arm64/sve: Clean up UAPI register ID definitions
Date: Fri, 3 May 2019 13:44:04 +0100
Message-Id: <20190503124427.190206-34-marc.zyngier@arm.com>
In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com>

From: Dave Martin

Currently, the SVE register ID macros are not all defined in the same
way, and advertise the fact that FFR maps onto the nonexistent
predicate register P16.  This is really just for kernel convenience,
and may lead userspace into bad habits.

Instead, this patch masks the ID macro arguments so that
architecturally invalid register numbers will not be passed through any
more, and uses a literal KVM_REG_ARM64_SVE_FFR_BASE macro to define
KVM_REG_ARM64_SVE_FFR(), similarly to the way the _ZREG() and _PREG()
macros are defined.

Rather than plugging in magic numbers for the number of Z- and P-
registers and the maximum possible number of register slices, this
patch provides definitions for those too.  Userspace is going to need
them in any case, and it makes sense for them to come from .

sve_reg_to_region() uses convenience constants that are defined in a
different way, and also makes use of the fact that the FFR IDs are
really contiguous with the P15 IDs, so this patch retains the existing
convenience constants in guest.c, supplemented with a couple of sanity
checks to check for consistency with the UAPI header.
Fixes: e1c9c98345b3 ("KVM: arm64/sve: Add SVE support to register access ioctl interface")
Suggested-by: Andrew Jones
Signed-off-by: Dave Martin
Reviewed-by: Andrew Jones
Signed-off-by: Marc Zyngier
---
 arch/arm64/include/uapi/asm/kvm.h | 32 ++++++++++++++++++++++---------
 arch/arm64/kvm/guest.c            |  9 +++++++++
 2 files changed, 32 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 6963b7e8062b..2a04ef015469 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -35,6 +35,7 @@
 #include
 #include
 #include
+#include <asm/sve_context.h>

 #define __KVM_HAVE_GUEST_DEBUG
 #define __KVM_HAVE_IRQ_LINE
@@ -233,16 +234,29 @@ struct kvm_vcpu_events {
 /* Z- and P-regs occupy blocks at the following offsets within this range: */
 #define KVM_REG_ARM64_SVE_ZREG_BASE	0
 #define KVM_REG_ARM64_SVE_PREG_BASE	0x400
+#define KVM_REG_ARM64_SVE_FFR_BASE	0x600

-#define KVM_REG_ARM64_SVE_ZREG(n, i)	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
-					 KVM_REG_ARM64_SVE_ZREG_BASE | \
-					 KVM_REG_SIZE_U2048 | \
-					 ((n) << 5) | (i))
-#define KVM_REG_ARM64_SVE_PREG(n, i)	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
-					 KVM_REG_ARM64_SVE_PREG_BASE | \
-					 KVM_REG_SIZE_U256 | \
-					 ((n) << 5) | (i))
-#define KVM_REG_ARM64_SVE_FFR(i)	KVM_REG_ARM64_SVE_PREG(16, i)
+#define KVM_ARM64_SVE_NUM_ZREGS	__SVE_NUM_ZREGS
+#define KVM_ARM64_SVE_NUM_PREGS	__SVE_NUM_PREGS
+
+#define KVM_ARM64_SVE_MAX_SLICES	32
+
+#define KVM_REG_ARM64_SVE_ZREG(n, i) \
+	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_ZREG_BASE | \
+	 KVM_REG_SIZE_U2048 | \
+	 (((n) & (KVM_ARM64_SVE_NUM_ZREGS - 1)) << 5) | \
+	 ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
+
+#define KVM_REG_ARM64_SVE_PREG(n, i) \
+	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_PREG_BASE | \
+	 KVM_REG_SIZE_U256 | \
+	 (((n) & (KVM_ARM64_SVE_NUM_PREGS - 1)) << 5) | \
+	 ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
+
+#define KVM_REG_ARM64_SVE_FFR(i) \
+	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_FFR_BASE | \
+	 KVM_REG_SIZE_U256 | \
+	 ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))

 /* Vector lengths pseudo-register: */
 #define KVM_REG_ARM64_SVE_VLS	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 4f7b26bbf671..2e449e1dea73 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -325,6 +325,15 @@ static int sve_reg_to_region(struct sve_state_reg_region *region,

	size_t sve_state_size;

+	const u64 last_preg_id = KVM_REG_ARM64_SVE_PREG(SVE_NUM_PREGS - 1,
+							SVE_NUM_SLICES - 1);
+
+	/* Verify that the P-regs and FFR really do have contiguous IDs: */
+	BUILD_BUG_ON(KVM_REG_ARM64_SVE_FFR(0) != last_preg_id + 1);
+
+	/* Verify that we match the UAPI header: */
+	BUILD_BUG_ON(SVE_NUM_SLICES != KVM_ARM64_SVE_MAX_SLICES);
+
	/* Only the first slice ever exists, for now: */
	if ((reg->id & SVE_REG_SLICE_MASK) != 0)
		return -ENOENT;
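[Editorial note: the practical effect of the new masking, modelled
standalone - an architecturally invalid register number can no longer bleed
into neighbouring ID bits.]

	#include <stdio.h>

	#define NUM_ZREGS	32u	/* mirrors KVM_ARM64_SVE_NUM_ZREGS */
	#define MAX_SLICES	32u	/* mirrors KVM_ARM64_SVE_MAX_SLICES */

	/* Simplified model of the index bits built by KVM_REG_ARM64_SVE_ZREG(): */
	static unsigned int zreg_index_bits(unsigned int n, unsigned int i)
	{
		return ((n & (NUM_ZREGS - 1)) << 5) | (i & (MAX_SLICES - 1));
	}

	int main(void)
	{
		/* n = 33 wraps to n = 1 instead of setting stray high bits: */
		printf("0x%x\n", zreg_index_bits(33, 0));	/* prints 0x20 */
		return 0;
	}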
From patchwork Fri May 3 12:44:05 2019
From: Marc Zyngier
Subject: [PATCH 34/56] KVM: arm64/sve: Miscellaneous tidyups in guest.c
Date: Fri, 3 May 2019 13:44:05 +0100
Message-Id: <20190503124427.190206-35-marc.zyngier@arm.com>
In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com>

From: Dave Martin

 * Remove a few redundant blank lines that are stylistically
   inconsistent with code already in guest.c and are just taking up
   space.

 * Delete a couple of pointless empty default cases from switch
   statements whose behaviour is otherwise obvious anyway.

 * Fix some typos and consolidate some redundantly duplicated
   comments.

 * Respell the slice index check in sve_reg_to_region() as "> 0" to be
   more consistent with what is logically being checked here (i.e., "is
   the slice index too large"), even though we don't try to cope with
   multiple slices yet.

No functional change.

Suggested-by: Andrew Jones
Signed-off-by: Dave Martin
Reviewed-by: Andrew Jones
Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/guest.c | 18 +++++-------------
 1 file changed, 5 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 2e449e1dea73..f5ff7aea25aa 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -290,9 +290,10 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 #define KVM_SVE_PREG_SIZE	KVM_REG_SIZE(KVM_REG_ARM64_SVE_PREG(0, 0))

 /*
- * number of register slices required to cover each whole SVE register on vcpu
- * NOTE: If you are tempted to modify this, you must also to rework
- * sve_reg_to_region() to match:
+ * Number of register slices required to cover each whole SVE register.
+ * NOTE: Only the first slice ever exists, for now.
+ * If you are tempted to modify this, you must also rework sve_reg_to_region()
+ * to match:
  */
 #define vcpu_sve_slices(vcpu) 1

@@ -334,8 +335,7 @@ static int sve_reg_to_region(struct sve_state_reg_region *region,
	/* Verify that we match the UAPI header: */
	BUILD_BUG_ON(SVE_NUM_SLICES != KVM_ARM64_SVE_MAX_SLICES);

-	/* Only the first slice ever exists, for now: */
-	if ((reg->id & SVE_REG_SLICE_MASK) != 0)
+	if ((reg->id & SVE_REG_SLICE_MASK) > 0)
		return -ENOENT;

	vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
@@ -520,7 +520,6 @@ static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)

 static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu)
 {
-	/* Only the first slice ever exists, for now */
	const unsigned int slices = vcpu_sve_slices(vcpu);

	if (!vcpu_has_sve(vcpu))
@@ -536,7 +535,6 @@ static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu)
 static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
				u64 __user *uindices)
 {
-	/* Only the first slice ever exists, for now */
	const unsigned int slices = vcpu_sve_slices(vcpu);
	u64 reg;
	unsigned int i, n;
@@ -555,7 +553,6 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
	reg = KVM_REG_ARM64_SVE_VLS;
	if (put_user(reg, uindices++))
		return -EFAULT;
-
	++num_regs;

	for (i = 0; i < slices; i++) {
@@ -563,7 +560,6 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
			reg = KVM_REG_ARM64_SVE_ZREG(n, i);
			if (put_user(reg, uindices++))
				return -EFAULT;
-
			num_regs++;
		}
@@ -571,14 +567,12 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
			reg = KVM_REG_ARM64_SVE_PREG(n, i);
			if (put_user(reg, uindices++))
				return -EFAULT;
-
			num_regs++;
		}

		reg = KVM_REG_ARM64_SVE_FFR(i);
		if (put_user(reg, uindices++))
			return -EFAULT;
-
		num_regs++;
	}
@@ -645,7 +639,6 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
	case KVM_REG_ARM_CORE:	return get_core_reg(vcpu, reg);
	case KVM_REG_ARM_FW:	return kvm_arm_get_fw_reg(vcpu, reg);
	case KVM_REG_ARM64_SVE:	return get_sve_reg(vcpu, reg);
-	default: break; /* fall through */
	}

	if (is_timer_reg(reg->id))
@@ -664,7 +657,6 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
	case KVM_REG_ARM_CORE:	return set_core_reg(vcpu, reg);
	case KVM_REG_ARM_FW:	return kvm_arm_set_fw_reg(vcpu, reg);
	case KVM_REG_ARM64_SVE:	return set_sve_reg(vcpu, reg);
-	default: break; /* fall through */
	}

	if (is_timer_reg(reg->id))
From patchwork Fri May 3 12:44:06 2019
From: Marc Zyngier
Subject: [PATCH 35/56] KVM: arm64/sve: Make register ioctl access errors more consistent
Date: Fri, 3 May 2019 13:44:06 +0100
Message-Id: <20190503124427.190206-36-marc.zyngier@arm.com>
In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com>

From: Dave Martin

Currently, the way error codes are generated when processing the SVE
register access ioctls is a bit haphazard.

This patch refactors the code so that the behaviour is more consistent:
now, -EINVAL should be returned only for unrecognised register IDs or
when some other runtime error occurs.  -ENOENT is returned for register
IDs that are recognised, but whose corresponding register (or slice)
does not exist for the vcpu.

To this end, in {get,set}_sve_reg() we now delegate the vcpu_has_sve()
check down into {get,set}_sve_vls() and sve_reg_to_region().  The
KVM_REG_ARM64_SVE_VLS special case is picked off first, then
sve_reg_to_region() plays the role of exhaustively validating or
rejecting the register ID and (where accepted) computing the applicable
register region as before.

sve_reg_to_region() is rearranged so that -ENOENT or -EPERM is not
returned prematurely, before checking whether reg->id is in a
recognised range.

-EPERM is now only returned when an attempt is made to access an
actually existing register slice on an unfinalized vcpu.
Fixes: e1c9c98345b3 ("KVM: arm64/sve: Add SVE support to register access ioctl interface")
Fixes: 9033bba4b535 ("KVM: arm64/sve: Add pseudo-register for the guest's vector lengths")
Suggested-by: Andrew Jones
Signed-off-by: Dave Martin
Reviewed-by: Andrew Jones
Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/guest.c | 52 +++++++++++++++++++++++++-----------------
 1 file changed, 31 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index f5ff7aea25aa..e45a042c0628 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -221,6 +221,9 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
	unsigned int max_vq, vq;
	u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)];

+	if (!vcpu_has_sve(vcpu))
+		return -ENOENT;
+
	if (WARN_ON(!sve_vl_valid(vcpu->arch.sve_max_vl)))
		return -EINVAL;

@@ -242,6 +245,9 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
	unsigned int max_vq, vq;
	u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)];

+	if (!vcpu_has_sve(vcpu))
+		return -ENOENT;
+
	if (kvm_arm_vcpu_sve_finalized(vcpu))
		return -EPERM; /* too late! */

@@ -304,7 +310,10 @@ struct sve_state_reg_region {
	unsigned int upad;	/* extra trailing padding in user memory */
 };

-/* Get sanitised bounds for user/kernel SVE register copy */
+/*
+ * Validate SVE register ID and get sanitised bounds for user/kernel SVE
+ * register copy
+ */
 static int sve_reg_to_region(struct sve_state_reg_region *region,
			     struct kvm_vcpu *vcpu,
			     const struct kvm_one_reg *reg)
@@ -335,25 +344,30 @@ static int sve_reg_to_region(struct sve_state_reg_region *region,
	/* Verify that we match the UAPI header: */
	BUILD_BUG_ON(SVE_NUM_SLICES != KVM_ARM64_SVE_MAX_SLICES);

-	if ((reg->id & SVE_REG_SLICE_MASK) > 0)
-		return -ENOENT;
-
-	vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
-
	reg_num = (reg->id & SVE_REG_ID_MASK) >> SVE_REG_ID_SHIFT;

	if (reg->id >= zreg_id_min && reg->id <= zreg_id_max) {
+		if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0)
+			return -ENOENT;
+
+		vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
+
		reqoffset = SVE_SIG_ZREG_OFFSET(vq, reg_num) -
				SVE_SIG_REGS_OFFSET;
		reqlen = KVM_SVE_ZREG_SIZE;
		maxlen = SVE_SIG_ZREG_SIZE(vq);
	} else if (reg->id >= preg_id_min && reg->id <= preg_id_max) {
+		if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0)
+			return -ENOENT;
+
+		vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
+
		reqoffset = SVE_SIG_PREG_OFFSET(vq, reg_num) -
				SVE_SIG_REGS_OFFSET;
		reqlen = KVM_SVE_PREG_SIZE;
		maxlen = SVE_SIG_PREG_SIZE(vq);
	} else {
-		return -ENOENT;
+		return -EINVAL;
	}

	sve_state_size = vcpu_sve_state_size(vcpu);
@@ -369,24 +383,22 @@ static int sve_reg_to_region(struct sve_state_reg_region *region,

 static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 {
+	int ret;
	struct sve_state_reg_region region;
	char __user *uptr = (char __user *)reg->addr;

-	if (!vcpu_has_sve(vcpu))
-		return -ENOENT;
-
	/* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */
	if (reg->id == KVM_REG_ARM64_SVE_VLS)
		return get_sve_vls(vcpu, reg);

-	/* Otherwise, reg is an architectural SVE register... */
+	/* Try to interpret reg ID as an architectural SVE register... */
+	ret = sve_reg_to_region(&region, vcpu, reg);
+	if (ret)
+		return ret;

	if (!kvm_arm_vcpu_sve_finalized(vcpu))
		return -EPERM;

-	if (sve_reg_to_region(&region, vcpu, reg))
-		return -ENOENT;
-
	if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset,
			 region.klen) ||
	    clear_user(uptr + region.klen, region.upad))
@@ -397,24 +409,22 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)

 static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 {
+	int ret;
	struct sve_state_reg_region region;
	const char __user *uptr = (const char __user *)reg->addr;

-	if (!vcpu_has_sve(vcpu))
-		return -ENOENT;
-
	/* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */
	if (reg->id == KVM_REG_ARM64_SVE_VLS)
		return set_sve_vls(vcpu, reg);

-	/* Otherwise, reg is an architectural SVE register... */
+	/* Try to interpret reg ID as an architectural SVE register... */
+	ret = sve_reg_to_region(&region, vcpu, reg);
+	if (ret)
+		return ret;

	if (!kvm_arm_vcpu_sve_finalized(vcpu))
		return -EPERM;

-	if (sve_reg_to_region(&region, vcpu, reg))
-		return -ENOENT;
-
	if (copy_from_user(vcpu->arch.sve_state + region.koffset, uptr,
			   region.klen))
		return -EFAULT;
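[Editorial note: the resulting error precedence for SVE register access,
summarised as a standalone sketch rather than the kernel's exact code.]

	#include <errno.h>
	#include <stdbool.h>

	/* Order of checks after this patch, for a non-VLS SVE register ID: */
	static int classify_access(bool id_recognised, bool reg_exists,
				   bool finalized)
	{
		if (!id_recognised)
			return -EINVAL;	/* not an SVE register ID at all */
		if (!reg_exists)
			return -ENOENT;	/* valid ID, but no such register/slice */
		if (!finalized)
			return -EPERM;	/* exists, but SVE config not yet final */
		return 0;
	}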
lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 36/56] KVM: arm64/sve: WARN when avoiding divide-by-zero in sve_reg_to_region() Date: Fri, 3 May 2019 13:44:07 +0100 Message-Id: <20190503124427.190206-37-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin sve_reg_to_region() currently passes the result of vcpu_sve_state_size() to array_index_nospec(), effectively leading to a divide / modulo operation. Currently the code bails out and returns -EINVAL if vcpu_sve_state_size() turns out to be zero, in order to avoid going ahead and attempting to divide by zero. This is reasonable, but it should only happen if the kernel contains some other bug that allowed this code to be reached without the vcpu having been properly initialised. To make it clear that this is a defence against bugs rather than something that the user should be able to trigger, this patch marks the check with WARN_ON(). Suggested-by: Andrew Jones Signed-off-by: Dave Martin Reviewed-by: Andrew Jones Signed-off-by: Marc Zyngier --- arch/arm64/kvm/guest.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index e45a042c0628..73044e3f8706 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -371,7 +371,7 @@ static int sve_reg_to_region(struct sve_state_reg_region *region, } sve_state_size = vcpu_sve_state_size(vcpu); - if (!sve_state_size) + if (WARN_ON(!sve_state_size)) return -EINVAL; region->koffset = array_index_nospec(reqoffset, sve_state_size); From patchwork Fri May 3 12:44:08 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928621 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1F04A1390 for ; Fri, 3 May 2019 12:46:55 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0E2EB204FD for ; Fri, 3 May 2019 12:46:55 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 0274A28595; Fri, 3 May 2019 12:46:54 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 73336204FD for ; Fri, 3 May 2019 12:46:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728044AbfECMqx (ORCPT ); Fri, 3 May 2019 08:46:53 -0400 Received: from foss.arm.com ([217.140.101.70]:60784 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727562AbfECMqx (ORCPT ); Fri, 3 May 2019 08:46:53 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A6C9615AD; Fri, 3 May 2019 05:46:52 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com 
From patchwork Fri May 3 12:44:08 2019
From: Marc Zyngier
Subject: [PATCH 37/56] KVM: arm64/sve: Simplify KVM_REG_ARM64_SVE_VLS array sizing
Date: Fri, 3 May 2019 13:44:08 +0100
Message-Id: <20190503124427.190206-38-marc.zyngier@arm.com>
In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com>

From: Dave Martin

A complicated DIV_ROUND_UP() expression is currently written out
explicitly in multiple places in order to specify the size of the
bitmap exchanged with userspace to represent the value of the
KVM_REG_ARM64_SVE_VLS pseudo-register.

Userspace currently has no direct way to work this out either: for
documentation purposes, the size is just quoted as 8 u64s.

To make this more intuitive, this patch replaces these with a single
define, which is also exported to userspace as KVM_ARM64_SVE_VLS_WORDS.

Since the number of words in a bitmap is just the index of the last
word used + 1, this patch expresses the bound that way instead.  This
should make it clearer what is being expressed.

For userspace convenience, the minimum and maximum possible vector
lengths relevant to the KVM ABI are exposed to UAPI as
KVM_ARM64_SVE_VQ_MIN, KVM_ARM64_SVE_VQ_MAX.  Since the only direct use
for these at present is manipulation of KVM_REG_ARM64_SVE_VLS, no
corresponding _VL_ macros are defined.  They could be added later if a
need arises.

Since use of DIV_ROUND_UP() was the only reason for including
<linux/kernel.h> in guest.c, this patch also removes that #include.

Suggested-by: Andrew Jones
Signed-off-by: Dave Martin
Reviewed-by: Andrew Jones
Signed-off-by: Marc Zyngier
---
 Documentation/virtual/kvm/api.txt | 10 ++++++----
 arch/arm64/include/uapi/asm/kvm.h |  5 +++++
 arch/arm64/kvm/guest.c            |  7 +++----
 3 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 68509dee23e8..03df379a02b0 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2171,13 +2171,15 @@ and KVM_ARM_VCPU_FINALIZE for more information about this procedure.

 KVM_REG_ARM64_SVE_VLS is a pseudo-register that allows the set of vector
 lengths supported by the vcpu to be discovered and configured by
 userspace.
 When transferred to or from user memory via KVM_GET_ONE_REG
-or KVM_SET_ONE_REG, the value of this register is of type __u64[8], and
-encodes the set of vector lengths as follows:
+or KVM_SET_ONE_REG, the value of this register is of type
+__u64[KVM_ARM64_SVE_VLS_WORDS], and encodes the set of vector lengths as
+follows:

-__u64 vector_lengths[8];
+__u64 vector_lengths[KVM_ARM64_SVE_VLS_WORDS];

 if (vq >= SVE_VQ_MIN && vq <= SVE_VQ_MAX &&
-    ((vector_lengths[(vq - 1) / 64] >> ((vq - 1) % 64)) & 1))
+    ((vector_lengths[(vq - KVM_ARM64_SVE_VQ_MIN) / 64] >>
+		((vq - KVM_ARM64_SVE_VQ_MIN) % 64)) & 1))
	/* Vector length vq * 16 bytes supported */
 else
	/* Vector length vq * 16 bytes not supported */

diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 2a04ef015469..edd2db8e5160 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -258,9 +258,14 @@ struct kvm_vcpu_events {
	 KVM_REG_SIZE_U256 | \
	 ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))

+#define KVM_ARM64_SVE_VQ_MIN __SVE_VQ_MIN
+#define KVM_ARM64_SVE_VQ_MAX __SVE_VQ_MAX
+
 /* Vector lengths pseudo-register: */
 #define KVM_REG_ARM64_SVE_VLS	(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
				 KVM_REG_SIZE_U512 | 0xffff)
+#define KVM_ARM64_SVE_VLS_WORDS	\
+	((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1)

 /* Device Control API: ARM VGIC */
 #define KVM_DEV_ARM_VGIC_GRP_ADDR	0
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 73044e3f8706..5bb909c3ff7c 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -23,7 +23,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -210,7 +209,7 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 #define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64)

 static bool vq_present(
-	const u64 (*const vqs)[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)],
+	const u64 (*const vqs)[KVM_ARM64_SVE_VLS_WORDS],
	unsigned int vq)
 {
	return (*vqs)[vq_word(vq)] & vq_mask(vq);
@@ -219,7 +218,7 @@ static bool vq_present(
 static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 {
	unsigned int max_vq, vq;
-	u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)];
+	u64 vqs[KVM_ARM64_SVE_VLS_WORDS];

	if (!vcpu_has_sve(vcpu))
		return -ENOENT;
@@ -243,7 +242,7 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 {
	unsigned int max_vq, vq;
-	u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)];
+	u64 vqs[KVM_ARM64_SVE_VLS_WORDS];

	if (!vcpu_has_sve(vcpu))
		return -ENOENT;
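[Editorial note: with the architectural limits __SVE_VQ_MIN = 1 and
__SVE_VQ_MAX = 512, KVM_ARM64_SVE_VLS_WORDS evaluates to
(512 - 1) / 64 + 1 = 8, matching the __u64[8] wording the documentation
previously used.]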
version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 01C0B204FD for ; Fri, 3 May 2019 12:46:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728050AbfECMq5 (ORCPT ); Fri, 3 May 2019 08:46:57 -0400 Received: from foss.arm.com ([217.140.101.70]:60810 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728053AbfECMq4 (ORCPT ); Fri, 3 May 2019 08:46:56 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2E7411682; Fri, 3 May 2019 05:46:56 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id E9E5C3F220; Fri, 3 May 2019 05:46:52 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 38/56] KVM: arm64/sve: Explain validity checks in set_sve_vls() Date: Fri, 3 May 2019 13:44:09 +0100 Message-Id: <20190503124427.190206-39-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin Correct virtualization of SVE relies on code in set_sve_vls() that verifies consistency between the set of vector lengths requested by userspace and the set of vector lengths available on the host. However, the purpose of this code is not obvious, and is unlikely to be apparent at all to people who do not have detailed knowledge of the SVE system-level architecture. This patch adds a suitable comment to explain what these checks are for. No functional change. Suggested-by: Andrew Jones Signed-off-by: Dave Martin Reviewed-by: Andrew Jones Signed-off-by: Marc Zyngier --- arch/arm64/kvm/guest.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 5bb909c3ff7c..3ae2f82fca46 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -264,6 +264,13 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (max_vq > sve_vq_from_vl(kvm_sve_max_vl)) return -EINVAL; + /* + * Vector lengths supported by the host can't currently be + * hidden from the guest individually: instead we can only set a + * maximum via ZCR_EL2.LEN. 
So, make sure the available vector + * lengths match the set requested exactly up to the requested + * maximum: + */ for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq) if (vq_present(&vqs, vq) != sve_vq_available(vq)) return -EINVAL; From patchwork Fri May 3 12:44:10 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928627 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C87591390 for ; Fri, 3 May 2019 12:47:02 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B71FA204FD for ; Fri, 3 May 2019 12:47:02 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id AB3C328595; Fri, 3 May 2019 12:47:02 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9A9E2204FD for ; Fri, 3 May 2019 12:47:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728059AbfECMrA (ORCPT ); Fri, 3 May 2019 08:47:00 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60834 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727562AbfECMrA (ORCPT ); Fri, 3 May 2019 08:47:00 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A8F3F15AD; Fri, 3 May 2019 05:46:59 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 718F83F220; Fri, 3 May 2019 05:46:56 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 39/56] KVM: arm/arm64: Clean up vcpu finalization function parameter naming Date: Fri, 3 May 2019 13:44:10 +0100 Message-Id: <20190503124427.190206-40-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin Currently, the internal vcpu finalization functions use a different name ("what") for the feature parameter than the name ("feature") used in the documentation. To avoid future confusion, this patch converts everything to use the name "feature" consistently. No functional change. 
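For context, here is a hedged sketch (not part of this patch) of how the "feature" argument flows from the KVM_ARM_VCPU_FINALIZE ioctl into this hook. The function name is hypothetical and the dispatch is simplified from virt/kvm/arm/arm.c, which also checks that the vcpu has been initialized first:

static long vcpu_finalize_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
{
	int feature;

	/* The ioctl argument is the same "feature" value documented for
	 * KVM_ARM_VCPU_FINALIZE, hence the consistent naming. */
	if (copy_from_user(&feature, argp, sizeof(feature)))
		return -EFAULT;

	return kvm_arm_vcpu_finalize(vcpu, feature);
}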
Suggested-by: Andrew Jones Signed-off-by: Dave Martin Reviewed-by: Andrew Jones Signed-off-by: Marc Zyngier --- arch/arm/include/asm/kvm_host.h | 2 +- arch/arm64/include/asm/kvm_host.h | 2 +- arch/arm64/kvm/reset.c | 4 ++-- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h index 7feddacbc207..fe7754315e9c 100644 --- a/arch/arm/include/asm/kvm_host.h +++ b/arch/arm/include/asm/kvm_host.h @@ -412,7 +412,7 @@ static inline int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type) return 0; } -static inline int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int what) +static inline int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature) { return -EINVAL; } diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 6adf08ba9277..7a096fdb333d 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -627,7 +627,7 @@ void kvm_arch_free_vm(struct kvm *kvm); int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type); -int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int what); +int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature); bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu); #define kvm_arm_vcpu_sve_finalized(vcpu) \ diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index 8847f389f56d..3402543fdcd3 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -186,9 +186,9 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu) return 0; } -int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int what) +int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature) { - switch (what) { + switch (feature) { case KVM_ARM_VCPU_SVE: if (!vcpu_has_sve(vcpu)) return -EINVAL; From patchwork Fri May 3 12:44:11 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928629 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A8EAE1398 for ; Fri, 3 May 2019 12:47:05 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 97A4E204FD for ; Fri, 3 May 2019 12:47:05 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 8C26728595; Fri, 3 May 2019 12:47:05 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 36433204FD for ; Fri, 3 May 2019 12:47:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728060AbfECMrE (ORCPT ); Fri, 3 May 2019 08:47:04 -0400 Received: from foss.arm.com ([217.140.101.70]:60858 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727403AbfECMrD (ORCPT ); Fri, 3 May 2019 08:47:03 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3047A15AD; Fri, 3 May 2019 05:47:03 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id EBE293F220; 
Fri, 3 May 2019 05:46:59 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 40/56] KVM: Clarify capability requirements for KVM_ARM_VCPU_FINALIZE Date: Fri, 3 May 2019 13:44:11 +0100 Message-Id: <20190503124427.190206-41-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin Userspace is only supposed to use KVM_ARM_VCPU_FINALIZE when there is some vcpu feature that can actually be finalized. This means that documenting KVM_ARM_VCPU_FINALIZE as available or not depending on the capabilities present is not helpful. This patch amends the documentation to describe availability in terms of which capability is required for each finalizable feature instead. In any case, userspace sees the same error (EINVAL) regardless of whether the given feature is not present or KVM_ARM_VCPU_FINALIZE is not implemented at all. No functional change. Suggested-by: Andrew Jones Signed-off-by: Dave Martin Reviewed-by: Andrew Jones Signed-off-by: Marc Zyngier --- Documentation/virtual/kvm/api.txt | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt index 03df379a02b0..5519df0d3ed0 100644 --- a/Documentation/virtual/kvm/api.txt +++ b/Documentation/virtual/kvm/api.txt @@ -3999,17 +3999,16 @@ userspace should not expect to get any particular value there. 4.119 KVM_ARM_VCPU_FINALIZE -Capability: KVM_CAP_ARM_SVE Architectures: arm, arm64 Type: vcpu ioctl Parameters: int feature (in) Returns: 0 on success, -1 on error Errors: EPERM: feature not enabled, needs configuration, or already finalized - EINVAL: unknown feature + EINVAL: feature unknown or not present Recognised values for feature: - arm64 KVM_ARM_VCPU_SVE + arm64 KVM_ARM_VCPU_SVE (requires KVM_CAP_ARM_SVE) Finalizes the configuration of the specified vcpu feature. 
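To make the documented flow concrete, here is a hedged userspace sketch of enabling and finalizing SVE on a vcpu. The helper name is hypothetical; vcpu_fd is assumed to come from KVM_CREATE_VCPU, KVM_CHECK_EXTENSION(KVM_CAP_ARM_SVE) is assumed to have returned nonzero, and error handling is minimal:

#include <linux/kvm.h>
#include <sys/ioctl.h>

static int enable_and_finalize_sve(int vcpu_fd, struct kvm_vcpu_init *init)
{
	int feature = KVM_ARM_VCPU_SVE;
	__u64 vls[KVM_ARM64_SVE_VLS_WORDS];
	struct kvm_one_reg reg = {
		.id   = KVM_REG_ARM64_SVE_VLS,
		.addr = (__u64)(unsigned long)vls,
	};

	init->features[0] |= 1 << KVM_ARM_VCPU_SVE;
	if (ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, init))
		return -1;

	/* Read the default (maximal) vector length set; a real VMM could
	 * clear bits here before writing the set back. */
	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
		return -1;
	if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg))
		return -1;

	/* After this, the vector length configuration is frozen. */
	return ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
}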
From patchwork Fri May 3 12:44:12 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928633 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 06F481390 for ; Fri, 3 May 2019 12:47:09 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E8707204FD for ; Fri, 3 May 2019 12:47:08 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id DCC6028595; Fri, 3 May 2019 12:47:08 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 79282204FD for ; Fri, 3 May 2019 12:47:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728061AbfECMrH (ORCPT ); Fri, 3 May 2019 08:47:07 -0400 Received: from foss.arm.com ([217.140.101.70]:60878 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727587AbfECMrH (ORCPT ); Fri, 3 May 2019 08:47:07 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AA8DE15AD; Fri, 3 May 2019 05:47:06 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 736623F220; Fri, 3 May 2019 05:47:03 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 41/56] KVM: Clarify KVM_{SET,GET}_ONE_REG error code documentation Date: Fri, 3 May 2019 13:44:12 +0100 Message-Id: <20190503124427.190206-42-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin The current error code documentation for KVM_GET_ONE_REG and KVM_SET_ONE_REG could be read as implying that all architectures implement these error codes, or that KVM guarantees which error code is returned in a particular situation. Because this is not really the case, this patch waters down the documentation explicitly to remove such guarantees. EPERM is marked as arm64-specific, since for now arm64 really is the only architecture that yields this error code for the finalization-required case. Keeping this as a distinct error code is useful however for debugging due to the statefulness of the API in this instance. No functional change. 
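To illustrate the intent, a hedged userspace sketch follows; the helper name is made up, and the errno values are the indicative ones from the documentation rather than guarantees:

#include <errno.h>
#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Treat KVM_SET_ONE_REG errno values as hints, not contract. */
static int set_reg_checked(int vcpu_fd, struct kvm_one_reg *reg)
{
	if (ioctl(vcpu_fd, KVM_SET_ONE_REG, reg) == 0)
		return 0;

	/* On arm64, EPERM hints that the vcpu must be finalized first. */
	if (errno == EPERM)
		return -EPERM;

	/* ENOENT and EINVAL overlap ("no such register" vs "bad ID"), so
	 * don't branch program logic on the exact code. */
	return -1;
}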
Suggested-by: Andrew Jones Fixes: 395f562f2b4c ("KVM: Document errors for KVM_GET_ONE_REG and KVM_SET_ONE_REG") Fixes: 50036ad06b7f ("KVM: arm64/sve: Document KVM API extensions for SVE") Signed-off-by: Dave Martin Reviewed-by: Andrew Jones Signed-off-by: Marc Zyngier --- Documentation/virtual/kvm/api.txt | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt index 5519df0d3ed0..818ac97fdabc 100644 --- a/Documentation/virtual/kvm/api.txt +++ b/Documentation/virtual/kvm/api.txt @@ -1873,8 +1873,10 @@ Parameters: struct kvm_one_reg (in) Returns: 0 on success, negative value on failure Errors:  ENOENT:   no such register -  EPERM:    register access forbidden for architecture-dependent reasons -  EINVAL:   other errors, such as bad size encoding for a known register +  EINVAL:   invalid register ID, or no such register +  EPERM:    (arm64) register access not allowed before vcpu finalization +(These error codes are indicative only: do not rely on a specific error +code being returned in a specific situation.) struct kvm_one_reg { __u64 id; @@ -2260,10 +2262,12 @@ Architectures: all Type: vcpu ioctl Parameters: struct kvm_one_reg (in and out) Returns: 0 on success, negative value on failure -Errors: +Errors include:  ENOENT:   no such register -  EPERM:    register access forbidden for architecture-dependent reasons -  EINVAL:   other errors, such as bad size encoding for a known register +  EINVAL:   invalid register ID, or no such register +  EPERM:    (arm64) register access not allowed before vcpu finalization +(These error codes are indicative only: do not rely on a specific error +code being returned in a specific situation.) This ioctl allows to receive the value of a single register implemented in a vcpu. 
The register to read is indicated by the "id" field of the From patchwork Fri May 3 12:44:13 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928635 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 894691398 for ; Fri, 3 May 2019 12:47:12 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 78E2F204FD for ; Fri, 3 May 2019 12:47:12 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 6C81328595; Fri, 3 May 2019 12:47:12 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 146CD204FD for ; Fri, 3 May 2019 12:47:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728065AbfECMrL (ORCPT ); Fri, 3 May 2019 08:47:11 -0400 Received: from foss.arm.com ([217.140.101.70]:60900 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728051AbfECMrK (ORCPT ); Fri, 3 May 2019 08:47:10 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 30B9C15AD; Fri, 3 May 2019 05:47:10 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id ED90D3F220; Fri, 3 May 2019 05:47:06 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 42/56] KVM: arm64: Clarify access behaviour for out-of-range SVE register slice IDs Date: Fri, 3 May 2019 13:44:13 +0100 Message-Id: <20190503124427.190206-43-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Dave Martin The existing documentation for which SVE register slice IDs are considered out-of-range, and what happens when userspace tries to access them, is cryptic. This patch rewords the text with the aim of making it a bit easier to understand. No functional change. 
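The reworded rule reduces to a single predicate. A hedged sketch, where the helper name is invented, 2048 is the number of bits per register slice, and 128 the number of bits per quadword:

/* A slice ID is in range iff 2048 * slice < 128 * max_vq. Since the
 * architecture currently caps max_vq at 16 (2048-bit vectors), only
 * slice 0 is ever valid today. */
static inline int sve_reg_slice_in_range(unsigned int slice,
					 unsigned int max_vq)
{
	return 2048 * slice < 128 * max_vq;
}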
Suggested-by: Andrew Jones Signed-off-by: Dave Martin Reviewed-by: Andrew Jones Signed-off-by: Marc Zyngier --- Documentation/virtual/kvm/api.txt | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt index 818ac97fdabc..e410a9f0f0d4 100644 --- a/Documentation/virtual/kvm/api.txt +++ b/Documentation/virtual/kvm/api.txt @@ -2159,8 +2159,9 @@ arm64 SVE registers have the following bit patterns: 0x6050 0000 0015 060 FFR bits[256*slice + 255 : 256*slice] 0x6060 0000 0015 ffff KVM_REG_ARM64_SVE_VLS pseudo-register -Access to slices beyond the maximum vector length configured for the -vcpu (i.e., where 16 * slice >= max_vq (**)) will fail with ENOENT. +Access to register IDs where 2048 * slice >= 128 * max_vq will fail with +ENOENT. max_vq is the vcpu's maximum supported vector length in 128-bit +quadwords: see (**) below. These registers are only accessible on vcpus for which SVE is enabled. See KVM_ARM_VCPU_INIT for details. From patchwork Fri May 3 12:44:14 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928639 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 04B091390 for ; Fri, 3 May 2019 12:47:16 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E6BD9204FD for ; Fri, 3 May 2019 12:47:15 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id DA5E128595; Fri, 3 May 2019 12:47:15 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 7AAEF28334 for ; Fri, 3 May 2019 12:47:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728067AbfECMrO (ORCPT ); Fri, 3 May 2019 08:47:14 -0400 Received: from foss.arm.com ([217.140.101.70]:60918 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727733AbfECMrO (ORCPT ); Fri, 3 May 2019 08:47:14 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AD16B15AD; Fri, 3 May 2019 05:47:13 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 73F7B3F220; Fri, 3 May 2019 05:47:10 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . 
lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 43/56] KVM: arm64: Add a vcpu flag to control ptrauth for guest Date: Fri, 3 May 2019 13:44:14 +0100 Message-Id: <20190503124427.190206-44-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Amit Daniel Kachhap A per vcpu flag is added to check if pointer authentication is enabled for the vcpu or not. This flag may be enabled according to the necessary user policies and host capabilities. This patch also adds a helper to check the flag. Reviewed-by: Dave Martin Signed-off-by: Amit Daniel Kachhap Cc: Mark Rutland Cc: Marc Zyngier Cc: Christoffer Dall Cc: kvmarm@lists.cs.columbia.edu Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_host.h | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 7a096fdb333d..7ccac42a91a6 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -355,10 +355,15 @@ struct kvm_vcpu_arch { #define KVM_ARM64_HOST_SVE_ENABLED (1 << 4) /* SVE enabled for EL0 */ #define KVM_ARM64_GUEST_HAS_SVE (1 << 5) /* SVE exposed to guest */ #define KVM_ARM64_VCPU_SVE_FINALIZED (1 << 6) /* SVE config completed */ +#define KVM_ARM64_GUEST_HAS_PTRAUTH (1 << 7) /* PTRAUTH exposed to guest */ #define vcpu_has_sve(vcpu) (system_supports_sve() && \ ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE)) +#define vcpu_has_ptrauth(vcpu) ((system_supports_address_auth() || \ + system_supports_generic_auth()) && \ + ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)) + #define vcpu_gp_regs(v) (&(v)->arch.ctxt.gp_regs) /* From patchwork Fri May 3 12:44:15 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928641 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D19F81398 for ; Fri, 3 May 2019 12:47:20 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BDB87204FD for ; Fri, 3 May 2019 12:47:20 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id AEE2F28595; Fri, 3 May 2019 12:47:20 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 77066204FD for ; Fri, 3 May 2019 12:47:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728051AbfECMrS (ORCPT ); Fri, 3 May 2019 08:47:18 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60940 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726789AbfECMrS (ORCPT ); Fri, 3 May 2019 08:47:18 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 325DD15AD; Fri, 3 May 2019 
05:47:17 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id EF55B3F220; Fri, 3 May 2019 05:47:13 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 44/56] KVM: arm/arm64: Context-switch ptrauth registers Date: Fri, 3 May 2019 13:44:15 +0100 Message-Id: <20190503124427.190206-45-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Mark Rutland When pointer authentication is supported, a guest may wish to use it. This patch adds the necessary KVM infrastructure for this to work, with a semi-lazy context switch of the pointer auth state. The pointer authentication feature is only enabled when VHE is built into the kernel and present in the CPU implementation, so only VHE code paths are modified. When we schedule a vcpu, we disable guest usage of pointer authentication instructions and accesses to the keys. While these are disabled, we avoid context-switching the keys. When we trap the guest trying to use pointer authentication functionality, we change to eagerly context-switching the keys, and enable the feature. The next time the vcpu is scheduled out/in, we start again. However, the host key save is optimized and implemented inside the ptrauth instruction/register access trap. Pointer authentication consists of address authentication and generic authentication, and CPUs in a system might have varied support for either. Where support for either feature is not uniform, it is hidden from guests via ID register emulation, as a result of the cpufeature framework in the host. Unfortunately, address authentication and generic authentication cannot be trapped separately, as the architecture provides a single EL2 trap covering both. If we wish to expose one without the other, we cannot prevent a (badly-written) guest from intermittently using a feature which is not uniformly supported (when scheduled on a physical CPU which supports the relevant feature). Hence, this patch expects both types of authentication to be present in a CPU. The key switch is done from guest enter/exit assembly as preparation for the upcoming in-kernel pointer authentication support. Hence, these key switching routines are not implemented in C code, as they may cause pointer authentication key signing errors in some situations. 
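One detail worth spelling out before the diff: at trap time the host keys are saved with a token-pasting macro. A hedged, hand-expanded view of __ptrauth_save_key(ctxt->sys_regs, APIA) as defined in this patch; the other four key pairs follow the same pattern:

/* Expansion sketch of __ptrauth_save_key(ctxt->sys_regs, APIA): */
ctxt->sys_regs[APIAKEYLO_EL1] = read_sysreg_s(SYS_APIAKEYLO_EL1);
ctxt->sys_regs[APIAKEYHI_EL1] = read_sysreg_s(SYS_APIAKEYHI_EL1);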
Signed-off-by: Mark Rutland [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks , save host key in ptrauth exception trap] Signed-off-by: Amit Daniel Kachhap Reviewed-by: Julien Thierry Cc: Christoffer Dall Cc: kvmarm@lists.cs.columbia.edu [maz: various fixups] Signed-off-by: Marc Zyngier --- arch/arm/include/asm/kvm_emulate.h | 2 + arch/arm64/Kconfig | 6 +- arch/arm64/include/asm/kvm_emulate.h | 16 ++++ arch/arm64/include/asm/kvm_host.h | 14 ++++ arch/arm64/include/asm/kvm_ptrauth.h | 111 +++++++++++++++++++++++++++ arch/arm64/kernel/asm-offsets.c | 6 ++ arch/arm64/kvm/handle_exit.c | 36 +++++++-- arch/arm64/kvm/hyp/entry.S | 15 ++++ arch/arm64/kvm/sys_regs.c | 50 ++++++++++-- virt/kvm/arm/arm.c | 2 + 10 files changed, 240 insertions(+), 18 deletions(-) create mode 100644 arch/arm64/include/asm/kvm_ptrauth.h diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h index 8927cae7c966..efb0e2c0d84c 100644 --- a/arch/arm/include/asm/kvm_emulate.h +++ b/arch/arm/include/asm/kvm_emulate.h @@ -343,4 +343,6 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu, } } +static inline void vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu) {} + #endif /* __ARM_KVM_EMULATE_H__ */ diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 7e34b9eba5de..39470784a50c 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1288,6 +1288,7 @@ menu "ARMv8.3 architectural features" config ARM64_PTR_AUTH bool "Enable support for pointer authentication" default y + depends on !KVM || ARM64_VHE help Pointer authentication (part of the ARMv8.3 Extensions) provides instructions for signing and authenticating pointers against secret @@ -1301,8 +1302,9 @@ config ARM64_PTR_AUTH context-switched along with the process. The feature is detected at runtime. If the feature is not present in - hardware it will not be advertised to userspace nor will it be - enabled. + hardware it will not be advertised to userspace/KVM guest nor will it + be enabled. However, KVM guest also require VHE mode and hence + CONFIG_ARM64_VHE=y option to use this feature. endmenu diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index d3842791e1c4..613427fafff9 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -98,6 +98,22 @@ static inline void vcpu_set_wfe_traps(struct kvm_vcpu *vcpu) vcpu->arch.hcr_el2 |= HCR_TWE; } +static inline void vcpu_ptrauth_enable(struct kvm_vcpu *vcpu) +{ + vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK); +} + +static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu) +{ + vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK); +} + +static inline void vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu) +{ + if (vcpu_has_ptrauth(vcpu)) + vcpu_ptrauth_disable(vcpu); +} + static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu) { return vcpu->arch.vsesr_el2; diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 7ccac42a91a6..7eebea7059c6 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -161,6 +161,18 @@ enum vcpu_sysreg { PMSWINC_EL0, /* Software Increment Register */ PMUSERENR_EL0, /* User Enable Register */ + /* Pointer Authentication Registers in a strict increasing order. */ + APIAKEYLO_EL1, + APIAKEYHI_EL1, + APIBKEYLO_EL1, + APIBKEYHI_EL1, + APDAKEYLO_EL1, + APDAKEYHI_EL1, + APDBKEYLO_EL1, + APDBKEYHI_EL1, + APGAKEYLO_EL1, + APGAKEYHI_EL1, + /* 32bit specific registers. 
Keep them at the end of the range */ DACR32_EL2, /* Domain Access Control Register */ IFSR32_EL2, /* Instruction Fault Status Register */ @@ -530,6 +542,8 @@ static inline bool kvm_arch_requires_vhe(void) return false; } +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu); + static inline void kvm_arch_hardware_unsetup(void) {} static inline void kvm_arch_sync_events(struct kvm *kvm) {} static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {} diff --git a/arch/arm64/include/asm/kvm_ptrauth.h b/arch/arm64/include/asm/kvm_ptrauth.h new file mode 100644 index 000000000000..6301813dcace --- /dev/null +++ b/arch/arm64/include/asm/kvm_ptrauth.h @@ -0,0 +1,111 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* arch/arm64/include/asm/kvm_ptrauth.h: Guest/host ptrauth save/restore + * Copyright 2019 Arm Limited + * Authors: Mark Rutland + * Amit Daniel Kachhap + */ + +#ifndef __ASM_KVM_PTRAUTH_H +#define __ASM_KVM_PTRAUTH_H + +#ifdef __ASSEMBLY__ + +#include + +#ifdef CONFIG_ARM64_PTR_AUTH + +#define PTRAUTH_REG_OFFSET(x) (x - CPU_APIAKEYLO_EL1) + +/* + * CPU_AP*_EL1 values exceed immediate offset range (512) for stp + * instruction so below macros takes CPU_APIAKEYLO_EL1 as base and + * calculates the offset of the keys from this base to avoid an extra add + * instruction. These macros assumes the keys offsets follow the order of + * the sysreg enum in kvm_host.h. + */ +.macro ptrauth_save_state base, reg1, reg2 + mrs_s \reg1, SYS_APIAKEYLO_EL1 + mrs_s \reg2, SYS_APIAKEYHI_EL1 + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)] + mrs_s \reg1, SYS_APIBKEYLO_EL1 + mrs_s \reg2, SYS_APIBKEYHI_EL1 + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)] + mrs_s \reg1, SYS_APDAKEYLO_EL1 + mrs_s \reg2, SYS_APDAKEYHI_EL1 + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)] + mrs_s \reg1, SYS_APDBKEYLO_EL1 + mrs_s \reg2, SYS_APDBKEYHI_EL1 + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)] + mrs_s \reg1, SYS_APGAKEYLO_EL1 + mrs_s \reg2, SYS_APGAKEYHI_EL1 + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)] +.endm + +.macro ptrauth_restore_state base, reg1, reg2 + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)] + msr_s SYS_APIAKEYLO_EL1, \reg1 + msr_s SYS_APIAKEYHI_EL1, \reg2 + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)] + msr_s SYS_APIBKEYLO_EL1, \reg1 + msr_s SYS_APIBKEYHI_EL1, \reg2 + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)] + msr_s SYS_APDAKEYLO_EL1, \reg1 + msr_s SYS_APDAKEYHI_EL1, \reg2 + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)] + msr_s SYS_APDBKEYLO_EL1, \reg1 + msr_s SYS_APDBKEYHI_EL1, \reg2 + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)] + msr_s SYS_APGAKEYLO_EL1, \reg1 + msr_s SYS_APGAKEYHI_EL1, \reg2 +.endm + +/* + * Both ptrauth_switch_to_guest and ptrauth_switch_to_host macros will + * check for the presence of one of the cpufeature flag + * ARM64_HAS_ADDRESS_AUTH_ARCH or ARM64_HAS_ADDRESS_AUTH_IMP_DEF and + * then proceed ahead with the save/restore of Pointer Authentication + * key registers. 
+ */ +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3 +alternative_if ARM64_HAS_ADDRESS_AUTH_ARCH + b 1000f +alternative_else_nop_endif +alternative_if_not ARM64_HAS_ADDRESS_AUTH_IMP_DEF + b 1001f +alternative_else_nop_endif +1000: + ldr \reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)] + and \reg1, \reg1, #(HCR_API | HCR_APK) + cbz \reg1, 1001f + add \reg1, \g_ctxt, #CPU_APIAKEYLO_EL1 + ptrauth_restore_state \reg1, \reg2, \reg3 +1001: +.endm + +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3 +alternative_if ARM64_HAS_ADDRESS_AUTH_ARCH + b 2000f +alternative_else_nop_endif +alternative_if_not ARM64_HAS_ADDRESS_AUTH_IMP_DEF + b 2001f +alternative_else_nop_endif +2000: + ldr \reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)] + and \reg1, \reg1, #(HCR_API | HCR_APK) + cbz \reg1, 2001f + add \reg1, \g_ctxt, #CPU_APIAKEYLO_EL1 + ptrauth_save_state \reg1, \reg2, \reg3 + add \reg1, \h_ctxt, #CPU_APIAKEYLO_EL1 + ptrauth_restore_state \reg1, \reg2, \reg3 + isb +2001: +.endm + +#else /* !CONFIG_ARM64_PTR_AUTH */ +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3 +.endm +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3 +.endm +#endif /* CONFIG_ARM64_PTR_AUTH */ +#endif /* __ASSEMBLY__ */ +#endif /* __ASM_KVM_PTRAUTH_H */ diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 7f40dcbdd51d..8178330a9f7a 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -125,7 +125,13 @@ int main(void) DEFINE(VCPU_CONTEXT, offsetof(struct kvm_vcpu, arch.ctxt)); DEFINE(VCPU_FAULT_DISR, offsetof(struct kvm_vcpu, arch.fault.disr_el1)); DEFINE(VCPU_WORKAROUND_FLAGS, offsetof(struct kvm_vcpu, arch.workaround_flags)); + DEFINE(VCPU_HCR_EL2, offsetof(struct kvm_vcpu, arch.hcr_el2)); DEFINE(CPU_GP_REGS, offsetof(struct kvm_cpu_context, gp_regs)); + DEFINE(CPU_APIAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1])); + DEFINE(CPU_APIBKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1])); + DEFINE(CPU_APDAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1])); + DEFINE(CPU_APDBKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1])); + DEFINE(CPU_APGAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1])); DEFINE(CPU_USER_PT_REGS, offsetof(struct kvm_regs, regs)); DEFINE(HOST_CONTEXT_VCPU, offsetof(struct kvm_cpu_context, __hyp_running_vcpu)); #endif diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index 0b7983442071..516aead3c2a9 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -173,20 +173,40 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run) return 1; } +#define __ptrauth_save_key(regs, key) \ +({ \ + regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1); \ + regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1); \ +}) + +/* + * Handle the guest trying to use a ptrauth instruction, or trying to access a + * ptrauth register. 
+ */ +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu) +{ + struct kvm_cpu_context *ctxt; + + if (vcpu_has_ptrauth(vcpu)) { + vcpu_ptrauth_enable(vcpu); + ctxt = vcpu->arch.host_cpu_context; + __ptrauth_save_key(ctxt->sys_regs, APIA); + __ptrauth_save_key(ctxt->sys_regs, APIB); + __ptrauth_save_key(ctxt->sys_regs, APDA); + __ptrauth_save_key(ctxt->sys_regs, APDB); + __ptrauth_save_key(ctxt->sys_regs, APGA); + } else { + kvm_inject_undefined(vcpu); + } +} + /* * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into * a NOP). */ static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run) { - /* - * We don't currently support ptrauth in a guest, and we mask the ID - * registers to prevent well-behaved guests from trying to make use of - * it. - * - * Inject an UNDEF, as if the feature really isn't present. - */ - kvm_inject_undefined(vcpu); + kvm_arm_vcpu_ptrauth_trap(vcpu); return 1; } diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S index 675fdc186e3b..93ba3d7ef027 100644 --- a/arch/arm64/kvm/hyp/entry.S +++ b/arch/arm64/kvm/hyp/entry.S @@ -24,6 +24,7 @@ #include #include #include +#include #define CPU_GP_REG_OFFSET(x) (CPU_GP_REGS + x) #define CPU_XREG_OFFSET(x) CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x) @@ -64,6 +65,13 @@ ENTRY(__guest_enter) add x18, x0, #VCPU_CONTEXT + // Macro ptrauth_switch_to_guest format: + // ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3) + // The below macro to restore guest keys is not implemented in C code + // as it may cause Pointer Authentication key signing mismatch errors + // when this feature is enabled for kernel code. + ptrauth_switch_to_guest x18, x0, x1, x2 + // Restore guest regs x0-x17 ldp x0, x1, [x18, #CPU_XREG_OFFSET(0)] ldp x2, x3, [x18, #CPU_XREG_OFFSET(2)] @@ -118,6 +126,13 @@ ENTRY(__guest_exit) get_host_ctxt x2, x3 + // Macro ptrauth_switch_to_guest format: + // ptrauth_switch_to_host(guest cxt, host cxt, tmp1, tmp2, tmp3) + // The below macro to save/restore keys is not implemented in C code + // as it may cause Pointer Authentication key signing mismatch errors + // when this feature is enabled for kernel code. + ptrauth_switch_to_host x1, x2, x3, x4, x5 + // Now restore the host regs restore_callee_saved_regs x2 diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 7046c7686321..12bd72e42b91 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1007,6 +1007,37 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p, { SYS_DESC(SYS_PMEVTYPERn_EL0(n)), \ access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), } +static bool trap_ptrauth(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *rd) +{ + kvm_arm_vcpu_ptrauth_trap(vcpu); + + /* + * Return false for both cases as we never skip the trapped + * instruction: + * + * - Either we re-execute the same key register access instruction + * after enabling ptrauth. + * - Or an UNDEF is injected as ptrauth is not supported/enabled. + */ + return false; +} + +static unsigned int ptrauth_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) +{ + return vcpu_has_ptrauth(vcpu) ? 
0 : REG_HIDDEN_USER | REG_HIDDEN_GUEST; +} + +#define __PTRAUTH_KEY(k) \ + { SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k, \ + .visibility = ptrauth_visibility} + +#define PTRAUTH_KEY(k) \ + __PTRAUTH_KEY(k ## KEYLO_EL1), \ + __PTRAUTH_KEY(k ## KEYHI_EL1) + static bool access_arch_timer(struct kvm_vcpu *vcpu, struct sys_reg_params *p, const struct sys_reg_desc *r) @@ -1053,14 +1084,11 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, if (id == SYS_ID_AA64PFR0_EL1 && !vcpu_has_sve(vcpu)) { val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT); - } else if (id == SYS_ID_AA64ISAR1_EL1) { - const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) | - (0xfUL << ID_AA64ISAR1_API_SHIFT) | - (0xfUL << ID_AA64ISAR1_GPA_SHIFT) | - (0xfUL << ID_AA64ISAR1_GPI_SHIFT); - if (val & ptrauth_mask) - kvm_debug("ptrauth unsupported for guests, suppressing\n"); - val &= ~ptrauth_mask; + } else if (id == SYS_ID_AA64ISAR1_EL1 && !vcpu_has_ptrauth(vcpu)) { + val &= ~(0xfUL << ID_AA64ISAR1_APA_SHIFT) | + (0xfUL << ID_AA64ISAR1_API_SHIFT) | + (0xfUL << ID_AA64ISAR1_GPA_SHIFT) | + (0xfUL << ID_AA64ISAR1_GPI_SHIFT); } return val; @@ -1460,6 +1488,12 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 }, { SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 }, + PTRAUTH_KEY(APIA), + PTRAUTH_KEY(APIB), + PTRAUTH_KEY(APDA), + PTRAUTH_KEY(APDB), + PTRAUTH_KEY(APGA), + { SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 }, { SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 }, { SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 }, diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c index 7039c99cc217..156c09da9e2b 100644 --- a/virt/kvm/arm/arm.c +++ b/virt/kvm/arm/arm.c @@ -385,6 +385,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) vcpu_clear_wfe_traps(vcpu); else vcpu_set_wfe_traps(vcpu); + + vcpu_ptrauth_setup_lazy(vcpu); } void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) From patchwork Fri May 3 12:44:16 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928643 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 29A871398 for ; Fri, 3 May 2019 12:47:23 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 167CB204FD for ; Fri, 3 May 2019 12:47:23 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 0A14B28595; Fri, 3 May 2019 12:47:23 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 68892204FD for ; Fri, 3 May 2019 12:47:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728072AbfECMrV (ORCPT ); Fri, 3 May 2019 08:47:21 -0400 Received: from foss.arm.com ([217.140.101.70]:60962 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728068AbfECMrV (ORCPT ); Fri, 3 May 2019 08:47:21 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com 
(Postfix) with ESMTP id AD860165C; Fri, 3 May 2019 05:47:20 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 75A0B3F220; Fri, 3 May 2019 05:47:17 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 45/56] KVM: arm64: Add userspace flag to enable pointer authentication Date: Fri, 3 May 2019 13:44:16 +0100 Message-Id: <20190503124427.190206-46-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Amit Daniel Kachhap Now that the building blocks of pointer authentication are present, let's add the userspace flags KVM_ARM_VCPU_PTRAUTH_ADDRESS and KVM_ARM_VCPU_PTRAUTH_GENERIC. These flags enable pointer authentication for the KVM guest on a per-vcpu basis through the ioctl KVM_ARM_VCPU_INIT. They allow the KVM guest to handle pointer authentication instructions; when the flags are not set, such instructions are treated as undefined. The documentation is updated to reflect these changes. Reviewed-by: Dave Martin Signed-off-by: Amit Daniel Kachhap Cc: Mark Rutland Cc: Marc Zyngier Cc: Christoffer Dall Cc: kvmarm@lists.cs.columbia.edu Signed-off-by: Marc Zyngier --- .../arm64/pointer-authentication.txt | 22 ++++++++++++--- Documentation/virtual/kvm/api.txt | 10 +++++++ arch/arm64/include/asm/kvm_host.h | 2 +- arch/arm64/include/uapi/asm/kvm.h | 2 ++ arch/arm64/kvm/reset.c | 27 +++++++++++++++++++ 5 files changed, 58 insertions(+), 5 deletions(-) diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt index 5baca42ba146..fc71b33de87e 100644 --- a/Documentation/arm64/pointer-authentication.txt +++ b/Documentation/arm64/pointer-authentication.txt @@ -87,7 +87,21 @@ used to get and set the keys for a thread. Virtualization -------------- -Pointer authentication is not currently supported in KVM guests. KVM -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of -the feature will result in an UNDEFINED exception being injected into -the guest. +Pointer authentication is enabled in a KVM guest when each virtual cpu is +initialised by passing the flags KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] and +requesting these two separate cpu features to be enabled. The current KVM +guest implementation works by enabling both features together, so both +these userspace flags are checked before enabling pointer authentication. +Keeping the userspace flags separate means that no userspace ABI changes +are needed if support is added in the future to allow these two features +to be enabled independently of one another. + +The Arm architecture specifies that the pointer authentication feature is +implemented along with the VHE feature, so the KVM arm64 ptrauth code +relies on VHE mode being present. 
+ +Additionally, when these vcpu feature flags are not set then KVM will +filter out the Pointer Authentication system key registers from +KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID +register. Any attempt to use the Pointer Authentication instructions will +result in an UNDEFINED exception being injected into the guest. diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt index e410a9f0f0d4..32afe7f5c35a 100644 --- a/Documentation/virtual/kvm/api.txt +++ b/Documentation/virtual/kvm/api.txt @@ -2761,6 +2761,16 @@ Possible features: - KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU. Depends on KVM_CAP_ARM_PMU_V3. + - KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication + for arm64 only. + Both KVM_ARM_VCPU_PTRAUTH_ADDRESS and KVM_ARM_VCPU_PTRAUTH_GENERIC + must be requested or neither must be requested. + + - KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication + for arm64 only. + Both KVM_ARM_VCPU_PTRAUTH_ADDRESS and KVM_ARM_VCPU_PTRAUTH_GENERIC + must be requested or neither must be requested. + - KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only). Depends on KVM_CAP_ARM_SVE. Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE): diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 7eebea7059c6..f772ac2fb3e9 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -49,7 +49,7 @@ #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS -#define KVM_VCPU_MAX_FEATURES 5 +#define KVM_VCPU_MAX_FEATURES 7 #define KVM_REQ_SLEEP \ KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP) diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index edd2db8e5160..7b7ac0f6cec9 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -104,6 +104,8 @@ struct kvm_regs { #define KVM_ARM_VCPU_PSCI_0_2 2 /* CPU uses PSCI v0.2 */ #define KVM_ARM_VCPU_PMU_V3 3 /* Support guest PMUv3 */ #define KVM_ARM_VCPU_SVE 4 /* enable SVE for this CPU */ +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS 5 /* VCPU uses address authentication */ +#define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic authentication */ struct kvm_vcpu_init { __u32 target; diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index 3402543fdcd3..028d0c604652 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -221,6 +221,27 @@ static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu) memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu)); } +static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu) +{ + /* Support ptrauth only if the system supports these capabilities. */ + if (!has_vhe()) + return -EINVAL; + + if (!system_supports_address_auth() || + !system_supports_generic_auth()) + return -EINVAL; + /* + * For now make sure that both address/generic pointer authentication + * features are requested by the userspace together. 
+ */ + if (!test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) || + !test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) + return -EINVAL; + + vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH; + return 0; +} + /** * kvm_reset_vcpu - sets core registers and sys_regs to reset value * @vcpu: The VCPU pointer @@ -261,6 +282,12 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu) kvm_vcpu_reset_sve(vcpu); } + if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) || + test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) { + if (kvm_vcpu_enable_ptrauth(vcpu)) + goto out; + } + switch (vcpu->arch.target) { default: if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) { From patchwork Fri May 3 12:44:17 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928645 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CD3131390 for ; Fri, 3 May 2019 12:47:26 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BBBD028334 for ; Fri, 3 May 2019 12:47:26 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id AFA4E285BD; Fri, 3 May 2019 12:47:26 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3967D28334 for ; Fri, 3 May 2019 12:47:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728068AbfECMrZ (ORCPT ); Fri, 3 May 2019 08:47:25 -0400 Received: from foss.arm.com ([217.140.101.70]:60978 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728062AbfECMrY (ORCPT ); Fri, 3 May 2019 08:47:24 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3409C374; Fri, 3 May 2019 05:47:24 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id F08423F220; Fri, 3 May 2019 05:47:20 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . 
lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 46/56] KVM: arm64: Add capability to advertise ptrauth for guest Date: Fri, 3 May 2019 13:44:17 +0100 Message-Id: <20190503124427.190206-47-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Amit Daniel Kachhap This patch advertises the capabilities for two cpu features: address pointer authentication and generic pointer authentication. These capabilities depend upon system support for pointer authentication and VHE mode. The current arm64 KVM implementation of pointer authentication is partial, and support for address and generic authentication is tied together. However, separate ABI requirements are added for each of them so that any future independent implementation will not require ABI changes. Signed-off-by: Amit Daniel Kachhap Cc: Mark Rutland Cc: Marc Zyngier Cc: Christoffer Dall Cc: kvmarm@lists.cs.columbia.edu Signed-off-by: Marc Zyngier --- Documentation/virtual/kvm/api.txt | 14 ++++++++++---- arch/arm64/kvm/reset.c | 5 +++++ include/uapi/linux/kvm.h | 2 ++ 3 files changed, 17 insertions(+), 4 deletions(-) diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt index 32afe7f5c35a..fac1887f25b5 100644 --- a/Documentation/virtual/kvm/api.txt +++ b/Documentation/virtual/kvm/api.txt @@ -2763,13 +2763,19 @@ Possible features: - KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication for arm64 only. - Both KVM_ARM_VCPU_PTRAUTH_ADDRESS and KVM_ARM_VCPU_PTRAUTH_GENERIC - must be requested or neither must be requested. + Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS. + If KVM_CAP_ARM_PTRAUTH_ADDRESS and KVM_CAP_ARM_PTRAUTH_GENERIC are + both present, then both KVM_ARM_VCPU_PTRAUTH_ADDRESS and + KVM_ARM_VCPU_PTRAUTH_GENERIC must be requested or neither must be + requested. - KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication for arm64 only. - Both KVM_ARM_VCPU_PTRAUTH_ADDRESS and KVM_ARM_VCPU_PTRAUTH_GENERIC - must be requested or neither must be requested. + Depends on KVM_CAP_ARM_PTRAUTH_GENERIC. + If KVM_CAP_ARM_PTRAUTH_ADDRESS and KVM_CAP_ARM_PTRAUTH_GENERIC are + both present, then both KVM_ARM_VCPU_PTRAUTH_ADDRESS and + KVM_ARM_VCPU_PTRAUTH_GENERIC must be requested or neither must be + requested. - KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only). Depends on KVM_CAP_ARM_SVE. 
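Before the code hunks, a hedged userspace sketch of the "both or neither" rule documented above; the helper name is hypothetical, kvm_fd is the /dev/kvm system fd, and error handling is elided:

#include <linux/kvm.h>
#include <sys/ioctl.h>

static void maybe_request_ptrauth(int kvm_fd, struct kvm_vcpu_init *init)
{
	/* Request ptrauth only when both capabilities are advertised. */
	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PTRAUTH_ADDRESS) > 0 &&
	    ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PTRAUTH_GENERIC) > 0) {
		init->features[0] |= 1 << KVM_ARM_VCPU_PTRAUTH_ADDRESS;
		init->features[0] |= 1 << KVM_ARM_VCPU_PTRAUTH_GENERIC;
	}
}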
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index 028d0c604652..f0faf54f5857 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -101,6 +101,11 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_ARM_SVE: r = system_supports_sve(); break; + case KVM_CAP_ARM_PTRAUTH_ADDRESS: + case KVM_CAP_ARM_PTRAUTH_GENERIC: + r = has_vhe() && system_supports_address_auth() && + system_supports_generic_auth(); + break; default: r = 0; } diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 1d564445b515..4dc34f8e29f6 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -989,6 +989,8 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166 #define KVM_CAP_HYPERV_CPUID 167 #define KVM_CAP_ARM_SVE 168 +#define KVM_CAP_ARM_PTRAUTH_ADDRESS 169 +#define KVM_CAP_ARM_PTRAUTH_GENERIC 170 #ifdef KVM_CAP_IRQ_ROUTING From patchwork Fri May 3 12:44:18 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928649 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A65121398 for ; Fri, 3 May 2019 12:47:31 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 94B4F204FD for ; Fri, 3 May 2019 12:47:31 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 8890B28595; Fri, 3 May 2019 12:47:31 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3557E204FD for ; Fri, 3 May 2019 12:47:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728094AbfECMr3 (ORCPT ); Fri, 3 May 2019 08:47:29 -0400 Received: from foss.arm.com ([217.140.101.70]:60998 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727992AbfECMr2 (ORCPT ); Fri, 3 May 2019 08:47:28 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AFC11165C; Fri, 3 May 2019 05:47:27 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 771713F220; Fri, 3 May 2019 05:47:24 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . 
lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 47/56] arm64: arm_pmu: Remove unnecessary isb instruction Date: Fri, 3 May 2019 13:44:18 +0100 Message-Id: <20190503124427.190206-48-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Andrew Murray The armv8pmu_enable_event_counter function issues an isb instruction after enabling a pair of counters - this doesn't provide any value and is inconsistent with the armv8pmu_disable_event_counter. In any case armv8pmu_enable_event_counter is always called with the PMU stopped. Starting the PMU with armv8pmu_start results in an isb instruction being issued prior to writing to PMCR_EL0. Let's remove the unnecessary isb instruction. Signed-off-by: Andrew Murray Reviewed-by: Suzuki K Poulose Acked-by: Mark Rutland Signed-off-by: Marc Zyngier --- arch/arm64/kernel/perf_event.c | 1 - 1 file changed, 1 deletion(-) diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c index 4addb38bc250..cccf4fc86d67 100644 --- a/arch/arm64/kernel/perf_event.c +++ b/arch/arm64/kernel/perf_event.c @@ -533,7 +533,6 @@ static inline void armv8pmu_enable_event_counter(struct perf_event *event) armv8pmu_enable_counter(idx); if (armv8pmu_event_is_chained(event)) armv8pmu_enable_counter(idx - 1); - isb(); } static inline int armv8pmu_disable_counter(int idx) From patchwork Fri May 3 12:44:19 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928653 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4707D1390 for ; Fri, 3 May 2019 12:47:39 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 347F5204FD for ; Fri, 3 May 2019 12:47:39 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 285F028595; Fri, 3 May 2019 12:47:39 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9106328334 for ; Fri, 3 May 2019 12:47:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728102AbfECMre (ORCPT ); Fri, 3 May 2019 08:47:34 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:32796 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728089AbfECMrb (ORCPT ); Fri, 3 May 2019 08:47:31 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3596880D; Fri, 3 May 2019 05:47:31 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id F205A3F220; Fri, 3 May 2019 05:47:27 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , 
=?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 48/56] arm64: KVM: Encapsulate kvm_cpu_context in kvm_host_data Date: Fri, 3 May 2019 13:44:19 +0100 Message-Id: <20190503124427.190206-49-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Andrew Murray The virt/arm core allocates a kvm_cpu_context_t percpu, at present this is a typedef to kvm_cpu_context and is used to store host cpu context. The kvm_cpu_context structure is also used elsewhere to hold vcpu context. In order to use the percpu to hold additional future host information we encapsulate kvm_cpu_context in a new structure and rename the typedef and percpu to match. Signed-off-by: Andrew Murray Reviewed-by: Suzuki K Poulose Signed-off-by: Marc Zyngier --- arch/arm/include/asm/kvm_host.h | 10 +++++++--- arch/arm64/include/asm/kvm_asm.h | 3 ++- arch/arm64/include/asm/kvm_host.h | 16 ++++++++++------ arch/arm64/kernel/asm-offsets.c | 1 + virt/kvm/arm/arm.c | 14 ++++++++------ 5 files changed, 28 insertions(+), 16 deletions(-) diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h index fe7754315e9c..2d721ab05925 100644 --- a/arch/arm/include/asm/kvm_host.h +++ b/arch/arm/include/asm/kvm_host.h @@ -153,9 +153,13 @@ struct kvm_cpu_context { u32 cp15[NR_CP15_REGS]; }; -typedef struct kvm_cpu_context kvm_cpu_context_t; +struct kvm_host_data { + struct kvm_cpu_context host_ctxt; +}; + +typedef struct kvm_host_data kvm_host_data_t; -static inline void kvm_init_host_cpu_context(kvm_cpu_context_t *cpu_ctxt, +static inline void kvm_init_host_cpu_context(struct kvm_cpu_context *cpu_ctxt, int cpu) { /* The host's MPIDR is immutable, so let's set it up at boot time */ @@ -185,7 +189,7 @@ struct kvm_vcpu_arch { struct kvm_vcpu_fault_info fault; /* Host FP context */ - kvm_cpu_context_t *host_cpu_context; + struct kvm_cpu_context *host_cpu_context; /* VGIC state */ struct vgic_cpu vgic_cpu; diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index f5b79e995f40..ff73f5462aca 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -108,7 +108,8 @@ extern u32 __kvm_get_mdcr_el2(void); .endm .macro get_host_ctxt reg, tmp - hyp_adr_this_cpu \reg, kvm_host_cpu_state, \tmp + hyp_adr_this_cpu \reg, kvm_host_data, \tmp + add \reg, \reg, #HOST_DATA_CONTEXT .endm .macro get_vcpu_ptr vcpu, ctxt diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index f772ac2fb3e9..9ba59832b71a 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -233,7 +233,11 @@ struct kvm_cpu_context { struct kvm_vcpu *__hyp_running_vcpu; }; -typedef struct kvm_cpu_context kvm_cpu_context_t; +struct kvm_host_data { + struct kvm_cpu_context host_ctxt; +}; + +typedef struct kvm_host_data kvm_host_data_t; struct vcpu_reset_state { unsigned long pc; @@ -278,7 +282,7 @@ struct kvm_vcpu_arch { struct kvm_guest_debug_arch 
external_debug_state; /* Pointer to host CPU context */ - kvm_cpu_context_t *host_cpu_context; + struct kvm_cpu_context *host_cpu_context; struct thread_info *host_thread_info; /* hyp VA */ struct user_fpsimd_state *host_fpsimd_state; /* hyp VA */ @@ -483,9 +487,9 @@ void kvm_set_sei_esr(struct kvm_vcpu *vcpu, u64 syndrome); struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr); -DECLARE_PER_CPU(kvm_cpu_context_t, kvm_host_cpu_state); +DECLARE_PER_CPU(kvm_host_data_t, kvm_host_data); -static inline void kvm_init_host_cpu_context(kvm_cpu_context_t *cpu_ctxt, +static inline void kvm_init_host_cpu_context(struct kvm_cpu_context *cpu_ctxt, int cpu) { /* The host's MPIDR is immutable, so let's set it up at boot time */ @@ -503,8 +507,8 @@ static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr, * kernel's mapping to the linear mapping, and store it in tpidr_el2 * so that we can use adr_l to access per-cpu variables in EL2. */ - u64 tpidr_el2 = ((u64)this_cpu_ptr(&kvm_host_cpu_state) - - (u64)kvm_ksym_ref(kvm_host_cpu_state)); + u64 tpidr_el2 = ((u64)this_cpu_ptr(&kvm_host_data) - + (u64)kvm_ksym_ref(kvm_host_data)); /* * Call initialization code, and switch to the full blown HYP code. diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 8178330a9f7a..768b23101ff0 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -134,6 +134,7 @@ int main(void) DEFINE(CPU_APGAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1])); DEFINE(CPU_USER_PT_REGS, offsetof(struct kvm_regs, regs)); DEFINE(HOST_CONTEXT_VCPU, offsetof(struct kvm_cpu_context, __hyp_running_vcpu)); + DEFINE(HOST_DATA_CONTEXT, offsetof(struct kvm_host_data, host_ctxt)); #endif #ifdef CONFIG_CPU_PM DEFINE(CPU_CTX_SP, offsetof(struct cpu_suspend_ctx, sp)); diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c index 156c09da9e2b..e960b91551d6 100644 --- a/virt/kvm/arm/arm.c +++ b/virt/kvm/arm/arm.c @@ -56,7 +56,7 @@ __asm__(".arch_extension virt"); #endif -DEFINE_PER_CPU(kvm_cpu_context_t, kvm_host_cpu_state); +DEFINE_PER_CPU(kvm_host_data_t, kvm_host_data); static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page); /* Per-CPU variable containing the currently running vcpu. 
*/ @@ -360,8 +360,10 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu) void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) { int *last_ran; + kvm_host_data_t *cpu_data; last_ran = this_cpu_ptr(vcpu->kvm->arch.last_vcpu_ran); + cpu_data = this_cpu_ptr(&kvm_host_data); /* * We might get preempted before the vCPU actually runs, but @@ -373,7 +375,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) } vcpu->cpu = cpu; - vcpu->arch.host_cpu_context = this_cpu_ptr(&kvm_host_cpu_state); + vcpu->arch.host_cpu_context = &cpu_data->host_ctxt; kvm_arm_set_running_vcpu(vcpu); kvm_vgic_load(vcpu); @@ -1569,11 +1571,11 @@ static int init_hyp_mode(void) } for_each_possible_cpu(cpu) { - kvm_cpu_context_t *cpu_ctxt; + kvm_host_data_t *cpu_data; - cpu_ctxt = per_cpu_ptr(&kvm_host_cpu_state, cpu); - kvm_init_host_cpu_context(cpu_ctxt, cpu); - err = create_hyp_mappings(cpu_ctxt, cpu_ctxt + 1, PAGE_HYP); + cpu_data = per_cpu_ptr(&kvm_host_data, cpu); + kvm_init_host_cpu_context(&cpu_data->host_ctxt, cpu); + err = create_hyp_mappings(cpu_data, cpu_data + 1, PAGE_HYP); if (err) { kvm_err("Cannot map host CPU state: %d\n", err); From patchwork Fri May 3 12:44:20 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928651 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EA9DC1398 for ; Fri, 3 May 2019 12:47:38 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DA813204FD for ; Fri, 3 May 2019 12:47:38 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id CE20F285BD; Fri, 3 May 2019 12:47:38 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 45661204FD for ; Fri, 3 May 2019 12:47:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728104AbfECMrf (ORCPT ); Fri, 3 May 2019 08:47:35 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:32814 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728103AbfECMrf (ORCPT ); Fri, 3 May 2019 08:47:35 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B0D4B165C; Fri, 3 May 2019 05:47:34 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 78DC23F220; Fri, 3 May 2019 05:47:31 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . 
lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 49/56] arm64: KVM: Add accessors to track guest/host only counters Date: Fri, 3 May 2019 13:44:20 +0100 Message-Id: <20190503124427.190206-50-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Andrew Murray In order to effeciently switch events_{guest,host} perf counters at guest entry/exit we add bitfields to kvm_cpu_context for guest and host events as well as accessors for updating them. A function is also provided which allows the PMU driver to determine if a counter should start counting when it is enabled. With exclude_host, we may only start counting when entering the guest. Signed-off-by: Andrew Murray Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_host.h | 17 ++++++++++++ arch/arm64/kvm/Makefile | 2 +- arch/arm64/kvm/pmu.c | 45 +++++++++++++++++++++++++++++++ 3 files changed, 63 insertions(+), 1 deletion(-) create mode 100644 arch/arm64/kvm/pmu.c diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 9ba59832b71a..655ad08edc3a 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -233,8 +233,14 @@ struct kvm_cpu_context { struct kvm_vcpu *__hyp_running_vcpu; }; +struct kvm_pmu_events { + u32 events_host; + u32 events_guest; +}; + struct kvm_host_data { struct kvm_cpu_context host_ctxt; + struct kvm_pmu_events pmu_events; }; typedef struct kvm_host_data kvm_host_data_t; @@ -572,11 +578,22 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu); void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu); void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu); +static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr) +{ + return attr->exclude_host; +} + #ifdef CONFIG_KVM /* Avoid conflicts with core headers if CONFIG_KVM=n */ static inline int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu) { return kvm_arch_vcpu_run_map_fp(vcpu); } + +void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr); +void kvm_clr_pmu_events(u32 clr); +#else +static inline void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr) {} +static inline void kvm_clr_pmu_events(u32 clr) {} #endif static inline void kvm_arm_vhe_guest_enter(void) diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile index 690e033a91c0..3ac1a64d2fb9 100644 --- a/arch/arm64/kvm/Makefile +++ b/arch/arm64/kvm/Makefile @@ -17,7 +17,7 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/psci.o $(KVM)/arm/perf.o kvm-$(CONFIG_KVM_ARM_HOST) += inject_fault.o regmap.o va_layout.o kvm-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o kvm-$(CONFIG_KVM_ARM_HOST) += guest.o debug.o reset.o sys_regs.o sys_regs_generic_v8.o -kvm-$(CONFIG_KVM_ARM_HOST) += vgic-sys-reg-v3.o fpsimd.o +kvm-$(CONFIG_KVM_ARM_HOST) += vgic-sys-reg-v3.o fpsimd.o pmu.o kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/aarch32.o kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic.o diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c new file mode 100644 index 000000000000..5414b134f99a --- /dev/null +++ b/arch/arm64/kvm/pmu.c @@ -0,0 +1,45 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2019 Arm Limited + * Author: Andrew Murray + */ +#include +#include + +/* + * Given the 
exclude_{host,guest} attributes, determine if we are going + * to need to switch counters at guest entry/exit. + */ +static bool kvm_pmu_switch_needed(struct perf_event_attr *attr) +{ + /* Only switch if attributes are different */ + return (attr->exclude_host != attr->exclude_guest); +} + +/* + * Add events to track that we may want to switch at guest entry/exit + * time. + */ +void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr) +{ + struct kvm_host_data *ctx = this_cpu_ptr(&kvm_host_data); + + if (!kvm_pmu_switch_needed(attr)) + return; + + if (!attr->exclude_host) + ctx->pmu_events.events_host |= set; + if (!attr->exclude_guest) + ctx->pmu_events.events_guest |= set; +} + +/* + * Stop tracking events + */ +void kvm_clr_pmu_events(u32 clr) +{ + struct kvm_host_data *ctx = this_cpu_ptr(&kvm_host_data); + + ctx->pmu_events.events_host &= ~clr; + ctx->pmu_events.events_guest &= ~clr; +} From patchwork Fri May 3 12:44:21 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928655 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AB29D1390 for ; Fri, 3 May 2019 12:47:41 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 98DE3204FD for ; Fri, 3 May 2019 12:47:41 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 8CB1928595; Fri, 3 May 2019 12:47:41 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1CC20204FD for ; Fri, 3 May 2019 12:47:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728077AbfECMrk (ORCPT ); Fri, 3 May 2019 08:47:40 -0400 Received: from foss.arm.com ([217.140.101.70]:32840 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728105AbfECMri (ORCPT ); Fri, 3 May 2019 08:47:38 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 386BB15AD; Fri, 3 May 2019 05:47:38 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 004F33F220; Fri, 3 May 2019 05:47:34 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . 
lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 50/56] arm64: arm_pmu: Add !VHE support for exclude_host/exclude_guest attributes Date: Fri, 3 May 2019 13:44:21 +0100 Message-Id: <20190503124427.190206-51-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Andrew Murray Add support for the :G and :H attributes in perf by handling the exclude_host/exclude_guest event attributes. We notify KVM of counters that we wish to be enabled or disabled on guest entry/exit and thus defer from starting or stopping events based on their event attributes. With !VHE we switch the counters between host/guest at EL2. We are able to eliminate counters counting host events on the boundaries of guest entry/exit when using :G by filtering out EL2 for exclude_host. When using !exclude_hv there is a small blackout window at the guest entry/exit where host events are not captured. Signed-off-by: Andrew Murray Acked-by: Will Deacon Signed-off-by: Marc Zyngier --- arch/arm64/kernel/perf_event.c | 43 ++++++++++++++++++++++++++++------ 1 file changed, 36 insertions(+), 7 deletions(-) diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c index cccf4fc86d67..6bb28aaf5aea 100644 --- a/arch/arm64/kernel/perf_event.c +++ b/arch/arm64/kernel/perf_event.c @@ -26,6 +26,7 @@ #include #include +#include #include #include #include @@ -528,11 +529,21 @@ static inline int armv8pmu_enable_counter(int idx) static inline void armv8pmu_enable_event_counter(struct perf_event *event) { + struct perf_event_attr *attr = &event->attr; int idx = event->hw.idx; + u32 counter_bits = BIT(ARMV8_IDX_TO_COUNTER(idx)); - armv8pmu_enable_counter(idx); if (armv8pmu_event_is_chained(event)) - armv8pmu_enable_counter(idx - 1); + counter_bits |= BIT(ARMV8_IDX_TO_COUNTER(idx - 1)); + + kvm_set_pmu_events(counter_bits, attr); + + /* We rely on the hypervisor switch code to enable guest counters */ + if (!kvm_pmu_counter_deferred(attr)) { + armv8pmu_enable_counter(idx); + if (armv8pmu_event_is_chained(event)) + armv8pmu_enable_counter(idx - 1); + } } static inline int armv8pmu_disable_counter(int idx) @@ -545,11 +556,21 @@ static inline int armv8pmu_disable_counter(int idx) static inline void armv8pmu_disable_event_counter(struct perf_event *event) { struct hw_perf_event *hwc = &event->hw; + struct perf_event_attr *attr = &event->attr; int idx = hwc->idx; + u32 counter_bits = BIT(ARMV8_IDX_TO_COUNTER(idx)); if (armv8pmu_event_is_chained(event)) - armv8pmu_disable_counter(idx - 1); - armv8pmu_disable_counter(idx); + counter_bits |= BIT(ARMV8_IDX_TO_COUNTER(idx - 1)); + + kvm_clr_pmu_events(counter_bits); + + /* We rely on the hypervisor switch code to disable guest counters */ + if (!kvm_pmu_counter_deferred(attr)) { + if (armv8pmu_event_is_chained(event)) + armv8pmu_disable_counter(idx - 1); + armv8pmu_disable_counter(idx); + } } static inline int armv8pmu_enable_intens(int idx) @@ -829,11 +850,16 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event, if (!attr->exclude_kernel) config_base |= ARMV8_PMU_INCLUDE_EL2; } else { - if (attr->exclude_kernel) - config_base |= ARMV8_PMU_EXCLUDE_EL1; - if (!attr->exclude_hv) + if (!attr->exclude_hv && !attr->exclude_host) config_base |= 
ARMV8_PMU_INCLUDE_EL2; } + + /* + * Filter out !VHE kernels and guest kernels + */ + if (attr->exclude_kernel) + config_base |= ARMV8_PMU_EXCLUDE_EL1; + if (attr->exclude_user) config_base |= ARMV8_PMU_EXCLUDE_EL0; @@ -863,6 +889,9 @@ static void armv8pmu_reset(void *info) armv8pmu_disable_intens(idx); } + /* Clear the counters we flip at guest entry/exit */ + kvm_clr_pmu_events(U32_MAX); + /* * Initialize & Reset PMNC. Request overflow interrupt for * 64 bit cycle counter but cheat in armv8pmu_write_counter(). From patchwork Fri May 3 12:44:22 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928659 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AC5261390 for ; Fri, 3 May 2019 12:47:46 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 99986204FD for ; Fri, 3 May 2019 12:47:46 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 8B43F28595; Fri, 3 May 2019 12:47:46 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0A53C204FD for ; Fri, 3 May 2019 12:47:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728120AbfECMrn (ORCPT ); Fri, 3 May 2019 08:47:43 -0400 Received: from foss.arm.com ([217.140.101.70]:32866 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728115AbfECMrm (ORCPT ); Fri, 3 May 2019 08:47:42 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B3A4915AD; Fri, 3 May 2019 05:47:41 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7B9D33F220; Fri, 3 May 2019 05:47:38 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 51/56] arm64: KVM: Enable !VHE support for :G/:H perf event modifiers Date: Fri, 3 May 2019 13:44:22 +0100 Message-Id: <20190503124427.190206-52-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Andrew Murray Enable/disable event counters as appropriate when entering and exiting the guest to enable support for guest or host only event counting. For both VHE and non-VHE we switch the counters between host/guest at EL2. 
The PMU may be on when we change which counters are enabled however we avoid adding an isb as we instead rely on existing context synchronisation events: the eret to enter the guest (__guest_enter) and eret in kvm_call_hyp for __kvm_vcpu_run_nvhe on returning. Signed-off-by: Andrew Murray Reviewed-by: Suzuki K Poulose Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_host.h | 3 +++ arch/arm64/kvm/hyp/switch.c | 6 +++++ arch/arm64/kvm/pmu.c | 39 +++++++++++++++++++++++++++++++ 3 files changed, 48 insertions(+) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 655ad08edc3a..645d74c705d6 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -591,6 +591,9 @@ static inline int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu) void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr); void kvm_clr_pmu_events(u32 clr); + +void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt); +bool __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt); #else static inline void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr) {} static inline void kvm_clr_pmu_events(u32 clr) {} diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c index 5444b9c6fb5c..22b4c335e0b2 100644 --- a/arch/arm64/kvm/hyp/switch.c +++ b/arch/arm64/kvm/hyp/switch.c @@ -566,6 +566,7 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; + bool pmu_switch_needed; u64 exit_code; /* @@ -585,6 +586,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu) host_ctxt->__hyp_running_vcpu = vcpu; guest_ctxt = &vcpu->arch.ctxt; + pmu_switch_needed = __pmu_switch_to_guest(host_ctxt); + __sysreg_save_state_nvhe(host_ctxt); __activate_vm(kern_hyp_va(vcpu->kvm)); @@ -631,6 +634,9 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu) */ __debug_switch_to_host(vcpu); + if (pmu_switch_needed) + __pmu_switch_to_host(host_ctxt); + /* Returning to host will clear PSR.I, remask PMR if needed */ if (system_uses_irq_prio_masking()) gic_write_pmr(GIC_PRIO_IRQOFF); diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c index 5414b134f99a..599e6d3f692e 100644 --- a/arch/arm64/kvm/pmu.c +++ b/arch/arm64/kvm/pmu.c @@ -5,6 +5,7 @@ */ #include #include +#include /* * Given the exclude_{host,guest} attributes, determine if we are going @@ -43,3 +44,41 @@ void kvm_clr_pmu_events(u32 clr) ctx->pmu_events.events_host &= ~clr; ctx->pmu_events.events_guest &= ~clr; } + +/** + * Disable host events, enable guest events + */ +bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt) +{ + struct kvm_host_data *host; + struct kvm_pmu_events *pmu; + + host = container_of(host_ctxt, struct kvm_host_data, host_ctxt); + pmu = &host->pmu_events; + + if (pmu->events_host) + write_sysreg(pmu->events_host, pmcntenclr_el0); + + if (pmu->events_guest) + write_sysreg(pmu->events_guest, pmcntenset_el0); + + return (pmu->events_host || pmu->events_guest); +} + +/** + * Disable guest events, enable host events + */ +void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) +{ + struct kvm_host_data *host; + struct kvm_pmu_events *pmu; + + host = container_of(host_ctxt, struct kvm_host_data, host_ctxt); + pmu = &host->pmu_events; + + if (pmu->events_guest) + write_sysreg(pmu->events_guest, pmcntenclr_el0); + + if (pmu->events_host) + write_sysreg(pmu->events_host, pmcntenset_el0); +} From patchwork Fri May 3 12:44:23 2019 Content-Type: text/plain; 
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928661 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4FC861398 for ; Fri, 3 May 2019 12:47:48 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3D0A8204FD for ; Fri, 3 May 2019 12:47:48 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2DAB328595; Fri, 3 May 2019 12:47:48 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 71C6F204FD for ; Fri, 3 May 2019 12:47:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728107AbfECMrq (ORCPT ); Fri, 3 May 2019 08:47:46 -0400 Received: from foss.arm.com ([217.140.101.70]:32886 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727657AbfECMrp (ORCPT ); Fri, 3 May 2019 08:47:45 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3A73715AD; Fri, 3 May 2019 05:47:45 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 02B333F220; Fri, 3 May 2019 05:47:41 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 52/56] arm64: KVM: Enable VHE support for :G/:H perf event modifiers Date: Fri, 3 May 2019 13:44:23 +0100 Message-Id: <20190503124427.190206-53-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Andrew Murray With VHE different exception levels are used between the host (EL2) and guest (EL1) with a shared exception level for userpace (EL0). We can take advantage of this and use the PMU's exception level filtering to avoid enabling/disabling counters in the world-switch code. Instead we just modify the counter type to include or exclude EL0 at vcpu_{load,put} time. We also ensure that trapped PMU system register writes do not re-enable EL0 when reconfiguring the backing perf events. This approach completely avoids blackout windows seen with !VHE. 
Suggested-by: Christoffer Dall Signed-off-by: Andrew Murray Acked-by: Will Deacon Reviewed-by: Suzuki K Poulose Signed-off-by: Marc Zyngier --- arch/arm/include/asm/kvm_host.h | 3 ++ arch/arm64/include/asm/kvm_host.h | 5 +- arch/arm64/kernel/perf_event.c | 6 ++- arch/arm64/kvm/pmu.c | 88 ++++++++++++++++++++++++++++++- arch/arm64/kvm/sys_regs.c | 3 ++ virt/kvm/arm/arm.c | 2 + 6 files changed, 103 insertions(+), 4 deletions(-) diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h index 2d721ab05925..075e1921fdd9 100644 --- a/arch/arm/include/asm/kvm_host.h +++ b/arch/arm/include/asm/kvm_host.h @@ -368,6 +368,9 @@ static inline void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) {} static inline void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) {} static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {} +static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {} +static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {} + static inline void kvm_arm_vhe_guest_enter(void) {} static inline void kvm_arm_vhe_guest_exit(void) {} diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 645d74c705d6..2a8d3f8ca22c 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -580,7 +580,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu); static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr) { - return attr->exclude_host; + return (!has_vhe() && attr->exclude_host); } #ifdef CONFIG_KVM /* Avoid conflicts with core headers if CONFIG_KVM=n */ @@ -594,6 +594,9 @@ void kvm_clr_pmu_events(u32 clr); void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt); bool __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt); + +void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu); +void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu); #else static inline void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr) {} static inline void kvm_clr_pmu_events(u32 clr) {} diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c index 6bb28aaf5aea..314b1adedf06 100644 --- a/arch/arm64/kernel/perf_event.c +++ b/arch/arm64/kernel/perf_event.c @@ -847,8 +847,12 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event, * with other architectures (x86 and Power). */ if (is_kernel_in_hyp_mode()) { - if (!attr->exclude_kernel) + if (!attr->exclude_kernel && !attr->exclude_host) config_base |= ARMV8_PMU_INCLUDE_EL2; + if (attr->exclude_guest) + config_base |= ARMV8_PMU_EXCLUDE_EL1; + if (attr->exclude_host) + config_base |= ARMV8_PMU_EXCLUDE_EL0; } else { if (!attr->exclude_hv && !attr->exclude_host) config_base |= ARMV8_PMU_INCLUDE_EL2; diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c index 599e6d3f692e..3f99a095a1ff 100644 --- a/arch/arm64/kvm/pmu.c +++ b/arch/arm64/kvm/pmu.c @@ -8,11 +8,19 @@ #include /* - * Given the exclude_{host,guest} attributes, determine if we are going - * to need to switch counters at guest entry/exit. + * Given the perf event attributes and system type, determine + * if we are going to need to switch counters at guest entry/exit. */ static bool kvm_pmu_switch_needed(struct perf_event_attr *attr) { + /** + * With VHE the guest kernel runs at EL1 and the host at EL2, + * where user (EL0) is excluded then we have no reason to switch + * counters. 
+ */ + if (has_vhe() && attr->exclude_user) + return false; + /* Only switch if attributes are different */ return (attr->exclude_host != attr->exclude_guest); } @@ -82,3 +90,79 @@ void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) if (pmu->events_host) write_sysreg(pmu->events_host, pmcntenset_el0); } + +/* + * Modify ARMv8 PMU events to include EL0 counting + */ +static void kvm_vcpu_pmu_enable_el0(unsigned long events) +{ + u64 typer; + u32 counter; + + for_each_set_bit(counter, &events, 32) { + write_sysreg(counter, pmselr_el0); + isb(); + typer = read_sysreg(pmxevtyper_el0) & ~ARMV8_PMU_EXCLUDE_EL0; + write_sysreg(typer, pmxevtyper_el0); + isb(); + } +} + +/* + * Modify ARMv8 PMU events to exclude EL0 counting + */ +static void kvm_vcpu_pmu_disable_el0(unsigned long events) +{ + u64 typer; + u32 counter; + + for_each_set_bit(counter, &events, 32) { + write_sysreg(counter, pmselr_el0); + isb(); + typer = read_sysreg(pmxevtyper_el0) | ARMV8_PMU_EXCLUDE_EL0; + write_sysreg(typer, pmxevtyper_el0); + isb(); + } +} + +/* + * On VHE ensure that only guest events have EL0 counting enabled + */ +void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) +{ + struct kvm_cpu_context *host_ctxt; + struct kvm_host_data *host; + u32 events_guest, events_host; + + if (!has_vhe()) + return; + + host_ctxt = vcpu->arch.host_cpu_context; + host = container_of(host_ctxt, struct kvm_host_data, host_ctxt); + events_guest = host->pmu_events.events_guest; + events_host = host->pmu_events.events_host; + + kvm_vcpu_pmu_enable_el0(events_guest); + kvm_vcpu_pmu_disable_el0(events_host); +} + +/* + * On VHE ensure that only host events have EL0 counting enabled + */ +void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) +{ + struct kvm_cpu_context *host_ctxt; + struct kvm_host_data *host; + u32 events_guest, events_host; + + if (!has_vhe()) + return; + + host_ctxt = vcpu->arch.host_cpu_context; + host = container_of(host_ctxt, struct kvm_host_data, host_ctxt); + events_guest = host->pmu_events.events_guest; + events_host = host->pmu_events.events_host; + + kvm_vcpu_pmu_enable_el0(events_host); + kvm_vcpu_pmu_disable_el0(events_guest); +} diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 12bd72e42b91..9d02643bc601 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -695,6 +695,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p, val |= p->regval & ARMV8_PMU_PMCR_MASK; __vcpu_sys_reg(vcpu, PMCR_EL0) = val; kvm_pmu_handle_pmcr(vcpu, val); + kvm_vcpu_pmu_restore_guest(vcpu); } else { /* PMCR.P & PMCR.C are RAZ */ val = __vcpu_sys_reg(vcpu, PMCR_EL0) @@ -850,6 +851,7 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p, if (p->is_write) { kvm_pmu_set_counter_event_type(vcpu, p->regval, idx); __vcpu_sys_reg(vcpu, reg) = p->regval & ARMV8_PMU_EVTYPE_MASK; + kvm_vcpu_pmu_restore_guest(vcpu); } else { p->regval = __vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_MASK; } @@ -875,6 +877,7 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p, /* accessing PMCNTENSET_EL0 */ __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val; kvm_pmu_enable_counter(vcpu, val); + kvm_vcpu_pmu_restore_guest(vcpu); } else { /* accessing PMCNTENCLR_EL0 */ __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val; diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c index e960b91551d6..8b7ca101f0f7 100644 --- a/virt/kvm/arm/arm.c +++ b/virt/kvm/arm/arm.c @@ -382,6 +382,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) 
kvm_timer_vcpu_load(vcpu); kvm_vcpu_load_sysregs(vcpu); kvm_arch_vcpu_load_fp(vcpu); + kvm_vcpu_pmu_restore_guest(vcpu); if (single_task_running()) vcpu_clear_wfe_traps(vcpu); @@ -397,6 +398,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) kvm_vcpu_put_sysregs(vcpu); kvm_timer_vcpu_put(vcpu); kvm_vgic_put(vcpu); + kvm_vcpu_pmu_restore_host(vcpu); vcpu->cpu = -1; From patchwork Fri May 3 12:44:24 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928669 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 040201390 for ; Fri, 3 May 2019 12:48:01 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E5066204FD for ; Fri, 3 May 2019 12:48:00 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id D916528595; Fri, 3 May 2019 12:48:00 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 646DE204FD for ; Fri, 3 May 2019 12:47:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727617AbfECMrt (ORCPT ); Fri, 3 May 2019 08:47:49 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:32906 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728121AbfECMrt (ORCPT ); Fri, 3 May 2019 08:47:49 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B50C21682; Fri, 3 May 2019 05:47:48 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7D82E3F220; Fri, 3 May 2019 05:47:45 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 53/56] arm64: KVM: Avoid isb's by using direct pmxevtyper sysreg Date: Fri, 3 May 2019 13:44:24 +0100 Message-Id: <20190503124427.190206-54-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Andrew Murray Upon entering or exiting a guest we may modify multiple PMU counters to enable or disable EL0 filtering. We presently do this via the indirect PMXEVTYPER_EL0 system register (where the counter we modify is selected by PMSELR). With this approach it is necessary to order the writes via isb instructions such that we select the correct counter before modifying it.
Let's avoid potentially expensive instruction barriers by using the direct PMEVTYPER_EL0 registers instead. As the change to counter type relates only to EL0 filtering we can rely on the implicit instruction barrier which occurs when we transition from EL2 to EL1 on entering the guest. On returning to userspace we can, at the latest, rely on the implicit barrier between EL2 and EL0. We can also depend on the explicit isb in armv8pmu_select_counter to order our write against any other kernel changes by the PMU driver to the type register as a result of preemption. Signed-off-by: Andrew Murray Reviewed-by: Suzuki K Poulose Signed-off-by: Marc Zyngier --- arch/arm64/kvm/pmu.c | 84 ++++++++++++++++++++++++++++++++++++++------ 1 file changed, 74 insertions(+), 10 deletions(-) diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c index 3f99a095a1ff..cd49db845ef4 100644 --- a/arch/arm64/kvm/pmu.c +++ b/arch/arm64/kvm/pmu.c @@ -91,6 +91,74 @@ void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) write_sysreg(pmu->events_host, pmcntenset_el0); } +#define PMEVTYPER_READ_CASE(idx) \ + case idx: \ + return read_sysreg(pmevtyper##idx##_el0) + +#define PMEVTYPER_WRITE_CASE(idx) \ + case idx: \ + write_sysreg(val, pmevtyper##idx##_el0); \ + break + +#define PMEVTYPER_CASES(readwrite) \ + PMEVTYPER_##readwrite##_CASE(0); \ + PMEVTYPER_##readwrite##_CASE(1); \ + PMEVTYPER_##readwrite##_CASE(2); \ + PMEVTYPER_##readwrite##_CASE(3); \ + PMEVTYPER_##readwrite##_CASE(4); \ + PMEVTYPER_##readwrite##_CASE(5); \ + PMEVTYPER_##readwrite##_CASE(6); \ + PMEVTYPER_##readwrite##_CASE(7); \ + PMEVTYPER_##readwrite##_CASE(8); \ + PMEVTYPER_##readwrite##_CASE(9); \ + PMEVTYPER_##readwrite##_CASE(10); \ + PMEVTYPER_##readwrite##_CASE(11); \ + PMEVTYPER_##readwrite##_CASE(12); \ + PMEVTYPER_##readwrite##_CASE(13); \ + PMEVTYPER_##readwrite##_CASE(14); \ + PMEVTYPER_##readwrite##_CASE(15); \ + PMEVTYPER_##readwrite##_CASE(16); \ + PMEVTYPER_##readwrite##_CASE(17); \ + PMEVTYPER_##readwrite##_CASE(18); \ + PMEVTYPER_##readwrite##_CASE(19); \ + PMEVTYPER_##readwrite##_CASE(20); \ + PMEVTYPER_##readwrite##_CASE(21); \ + PMEVTYPER_##readwrite##_CASE(22); \ + PMEVTYPER_##readwrite##_CASE(23); \ + PMEVTYPER_##readwrite##_CASE(24); \ + PMEVTYPER_##readwrite##_CASE(25); \ + PMEVTYPER_##readwrite##_CASE(26); \ + PMEVTYPER_##readwrite##_CASE(27); \ + PMEVTYPER_##readwrite##_CASE(28); \ + PMEVTYPER_##readwrite##_CASE(29); \ + PMEVTYPER_##readwrite##_CASE(30) + +/* + * Read a value direct from PMEVTYPER + */ +static u64 kvm_vcpu_pmu_read_evtype_direct(int idx) +{ + switch (idx) { + PMEVTYPER_CASES(READ); + default: + WARN_ON(1); + } + + return 0; +} + +/* + * Write a value direct to PMEVTYPER + */ +static void kvm_vcpu_pmu_write_evtype_direct(int idx, u32 val) +{ + switch (idx) { + PMEVTYPER_CASES(WRITE); + default: + WARN_ON(1); + } +} + /* * Modify ARMv8 PMU events to include EL0 counting */ @@ -100,11 +168,9 @@ static void kvm_vcpu_pmu_enable_el0(unsigned long events) u32 counter; for_each_set_bit(counter, &events, 32) { - write_sysreg(counter, pmselr_el0); - isb(); - typer = read_sysreg(pmxevtyper_el0) & ~ARMV8_PMU_EXCLUDE_EL0; - write_sysreg(typer, pmxevtyper_el0); - isb(); + typer = kvm_vcpu_pmu_read_evtype_direct(counter); + typer &= ~ARMV8_PMU_EXCLUDE_EL0; + kvm_vcpu_pmu_write_evtype_direct(counter, typer); } } @@ -117,11 +183,9 @@ static void kvm_vcpu_pmu_disable_el0(unsigned long events) u32 counter; for_each_set_bit(counter, &events, 32) { - write_sysreg(counter, pmselr_el0); - isb(); - typer = 
read_sysreg(pmxevtyper_el0) | ARMV8_PMU_EXCLUDE_EL0; - write_sysreg(typer, pmxevtyper_el0); - isb(); + typer = kvm_vcpu_pmu_read_evtype_direct(counter); + typer |= ARMV8_PMU_EXCLUDE_EL0; + kvm_vcpu_pmu_write_evtype_direct(counter, typer); } } From patchwork Fri May 3 12:44:25 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928665 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 389BD1390 for ; Fri, 3 May 2019 12:47:55 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2719B28334 for ; Fri, 3 May 2019 12:47:55 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 1B455285BD; Fri, 3 May 2019 12:47:55 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9D2A328334 for ; Fri, 3 May 2019 12:47:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728127AbfECMrx (ORCPT ); Fri, 3 May 2019 08:47:53 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:32938 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728062AbfECMrw (ORCPT ); Fri, 3 May 2019 08:47:52 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3B2781684; Fri, 3 May 2019 05:47:52 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 03E6C3F220; Fri, 3 May 2019 05:47:48 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 54/56] arm64: docs: Document perf event attributes Date: Fri, 3 May 2019 13:44:25 +0100 Message-Id: <20190503124427.190206-55-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Andrew Murray The interaction between the exclude_{host,guest} flags, exclude_{user,kernel,hv} flags and presence of VHE can result in different exception levels being filtered by the ARMv8 PMU. As this can be confusing let's document how they work on arm64. 
Signed-off-by: Andrew Murray Reviewed-by: Suzuki K Poulose Acked-by: Will Deacon Signed-off-by: Marc Zyngier --- Documentation/arm64/perf.txt | 85 ++++++++++++++++++++++++++++++++++++ 1 file changed, 85 insertions(+) create mode 100644 Documentation/arm64/perf.txt diff --git a/Documentation/arm64/perf.txt b/Documentation/arm64/perf.txt new file mode 100644 index 000000000000..0d6a7d87d49e --- /dev/null +++ b/Documentation/arm64/perf.txt @@ -0,0 +1,85 @@ +Perf Event Attributes +===================== + +Author: Andrew Murray +Date: 2019-03-06 + +exclude_user +------------ + +This attribute excludes userspace. + +Userspace always runs at EL0 and thus this attribute will exclude EL0. + + +exclude_kernel +-------------- + +This attribute excludes the kernel. + +The kernel runs at EL2 with VHE and EL1 without. Guest kernels always run +at EL1. + +For the host this attribute will exclude EL1 and additionally EL2 on a VHE +system. + +For the guest this attribute will exclude EL1. Please note that EL2 is +never counted within a guest. + + +exclude_hv +---------- + +This attribute excludes the hypervisor. + +For a VHE host this attribute is ignored as we consider the host kernel to +be the hypervisor. + +For a non-VHE host this attribute will exclude EL2 as we consider the +hypervisor to be any code that runs at EL2 which is predominantly used for +guest/host transitions. + +For the guest this attribute has no effect. Please note that EL2 is +never counted within a guest. + + +exclude_host / exclude_guest +---------------------------- + +These attributes exclude the KVM host and guest, respectively. + +The KVM host may run at EL0 (userspace), EL1 (non-VHE kernel) and EL2 (VHE +kernel or non-VHE hypervisor). + +The KVM guest may run at EL0 (userspace) and EL1 (kernel). + +Due to the overlapping exception levels between host and guests we cannot +exclusively rely on the PMU's hardware exception filtering - therefore we +must enable/disable counting on the entry and exit to the guest. This is +performed differently on VHE and non-VHE systems. + +For non-VHE systems we exclude EL2 for exclude_host - upon entering and +exiting the guest we disable/enable the event as appropriate based on the +exclude_host and exclude_guest attributes. + +For VHE systems we exclude EL1 for exclude_guest and exclude both EL0,EL2 +for exclude_host. Upon entering and exiting the guest we modify the event +to include/exclude EL0 as appropriate based on the exclude_host and +exclude_guest attributes. + +The statements above also apply when these attributes are used within a +non-VHE guest however please note that EL2 is never counted within a guest. + + +Accuracy +-------- + +On non-VHE hosts we enable/disable counters on the entry/exit of host/guest +transition at EL2 - however there is a period of time between +enabling/disabling the counters and entering/exiting the guest. We are +able to eliminate counters counting host events on the boundaries of guest +entry/exit when counting guest events by filtering out EL2 for +exclude_host. However when using !exclude_hv there is a small blackout +window at the guest entry/exit where host events are not captured. + +On VHE systems there are no blackout windows. 
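A brief usage illustration of the attributes documented above (a sketch; the workload and options are arbitrary):

  # guest-only and host-only cycle counts, system-wide, for one second
  perf stat -e cycles:G -e cycles:H -a sleep 1

On a non-VHE host the :H count is subject to the small blackout window around guest entry/exit described in the Accuracy section; on a VHE host there is no such window.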
From patchwork Fri May 3 12:44:26 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10928667 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 052E41390 for ; Fri, 3 May 2019 12:47:58 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E82D128334 for ; Fri, 3 May 2019 12:47:57 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id DC390285BD; Fri, 3 May 2019 12:47:57 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 7267028334 for ; Fri, 3 May 2019 12:47:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728115AbfECMr4 (ORCPT ); Fri, 3 May 2019 08:47:56 -0400 Received: from foss.arm.com ([217.140.101.70]:32962 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728114AbfECMr4 (ORCPT ); Fri, 3 May 2019 08:47:56 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B596B169E; Fri, 3 May 2019 05:47:55 -0700 (PDT) Received: from filthy-habits.cambridge.arm.com (filthy-habits.cambridge.arm.com [10.1.197.61]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7DF413F220; Fri, 3 May 2019 05:47:52 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Amit Daniel Kachhap , Andrew Jones , Andrew Murray , Christoffer Dall , Dave Martin , Julien Grall , Julien Thierry , Kristina Martsenko , Mark Rutland , Peter Maydell , Suzuki K Poulose , Will Deacon , "zhang . lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 55/56] arm64: KVM: Fix perf cycle counter support for VHE Date: Fri, 3 May 2019 13:44:26 +0100 Message-Id: <20190503124427.190206-56-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Andrew Murray The kvm_vcpu_pmu_{read,write}_evtype_direct functions do not handle the cycle counter use-case, this leads to inaccurate counts and a WARN message when using perf with the cycle counter (-e cycle). Let's fix this by adding a use case for pmccfiltr_el0. 
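Schematically, the mapping this fix completes (an illustrative comment, with indices as the PMU driver uses them):

/*
 * idx 0..30 -> PMEVTYPER<idx>_EL0 (via the generated switch cases)
 * idx 31    -> PMCCFILTR_EL0 (ARMV8_PMU_CYCLE_IDX, the cycle counter)
 */

Before this change, counter index 31 fell through to the WARN_ON(1) default branch whenever the cycle counter was in use.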
Fixes: 39e3406a090a ("arm64: KVM: Avoid isb's by using direct pmxevtyper sysreg")
Reported-by: Suzuki K Poulose
Signed-off-by: Andrew Murray
Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/pmu.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index cd49db845ef4..3da94a5bb6b7 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -134,12 +134,15 @@ void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
 	PMEVTYPER_##readwrite##_CASE(30)
 
 /*
- * Read a value direct from PMEVTYPER
+ * Read a value direct from PMEVTYPER<idx> where idx is 0-30
+ * or PMCCFILTR_EL0 where idx is ARMV8_PMU_CYCLE_IDX (31).
  */
 static u64 kvm_vcpu_pmu_read_evtype_direct(int idx)
 {
 	switch (idx) {
 	PMEVTYPER_CASES(READ);
+	case ARMV8_PMU_CYCLE_IDX:
+		return read_sysreg(pmccfiltr_el0);
 	default:
 		WARN_ON(1);
 	}
@@ -148,12 +151,16 @@ static u64 kvm_vcpu_pmu_read_evtype_direct(int idx)
 }
 
 /*
- * Write a value direct to PMEVTYPER
+ * Write a value direct to PMEVTYPER<idx> where idx is 0-30
+ * or PMCCFILTR_EL0 where idx is ARMV8_PMU_CYCLE_IDX (31).
  */
 static void kvm_vcpu_pmu_write_evtype_direct(int idx, u32 val)
 {
 	switch (idx) {
 	PMEVTYPER_CASES(WRITE);
+	case ARMV8_PMU_CYCLE_IDX:
+		write_sysreg(val, pmccfiltr_el0);
+		break;
 	default:
 		WARN_ON(1);
 	}

From patchwork Fri May 3 12:44:27 2019
From: Marc Zyngier
To: Paolo Bonzini, Radim Krčmář
Cc: Alex Bennée, Amit Daniel Kachhap, Andrew Jones, Andrew Murray,
    Christoffer Dall, Dave Martin, Julien Grall, Julien Thierry,
    Kristina Martsenko, Mark Rutland, Peter Maydell, Suzuki K Poulose,
    Will Deacon,
lei" , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org Subject: [PATCH 56/56] KVM: arm64: Fix ptrauth ID register masking logic Date: Fri, 3 May 2019 13:44:27 +0100 Message-Id: <20190503124427.190206-57-marc.zyngier@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190503124427.190206-1-marc.zyngier@arm.com> References: <20190503124427.190206-1-marc.zyngier@arm.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Kristina Martsenko When a VCPU doesn't have pointer auth, we want to hide all four pointer auth ID register fields from the guest, not just one of them. Fixes: 384b40caa8af ("KVM: arm/arm64: Context-switch ptrauth registers") Reported-by: Andrew Murray Fscked-up-by: Marc Zyngier Acked-by: Will Deacon Tested-by: Andrew Murray Signed-off-by: Kristina Martsenko Signed-off-by: Marc Zyngier --- arch/arm64/kvm/sys_regs.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 9d02643bc601..857b226bcdde 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1088,10 +1088,10 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, if (id == SYS_ID_AA64PFR0_EL1 && !vcpu_has_sve(vcpu)) { val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT); } else if (id == SYS_ID_AA64ISAR1_EL1 && !vcpu_has_ptrauth(vcpu)) { - val &= ~(0xfUL << ID_AA64ISAR1_APA_SHIFT) | - (0xfUL << ID_AA64ISAR1_API_SHIFT) | - (0xfUL << ID_AA64ISAR1_GPA_SHIFT) | - (0xfUL << ID_AA64ISAR1_GPI_SHIFT); + val &= ~((0xfUL << ID_AA64ISAR1_APA_SHIFT) | + (0xfUL << ID_AA64ISAR1_API_SHIFT) | + (0xfUL << ID_AA64ISAR1_GPA_SHIFT) | + (0xfUL << ID_AA64ISAR1_GPI_SHIFT)); } return val;