From patchwork Tue Sep 24 15:20:53 2019
X-Patchwork-Submitter: Heyi Guo
X-Patchwork-Id: 11159073
From: Heyi Guo
Cc: Heyi Guo, Peter Maydell, Dave Martin, Marc Zyngier, Mark Rutland,
    James Morse, Julien Thierry, Suzuki K Poulose, Russell King,
    Catalin Marinas, Will Deacon
Subject: [RFC PATCH 1/2] kvm/arm: add capability to forward hypercall to user space
Date: Tue, 24 Sep 2019 23:20:53 +0800
Message-ID: <1569338454-26202-2-git-send-email-guoheyi@huawei.com>
In-Reply-To: <1569338454-26202-1-git-send-email-guoheyi@huawei.com>
References: <1569338454-26202-1-git-send-email-guoheyi@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

As more SMC/HVC usages emerge on arm64 platforms, such as SDEI, it makes
sense for KVM to be able to forward such calls to user space for further
emulation.

We reuse the existing term "hypercall" for SMC/HVC, as well as the
hypercall structure in kvm_run, to exchange arguments and return values.
The definition on arm64 is as below:

exit_reason: KVM_EXIT_HYPERCALL

Input:
  nr:      the immediate value of the SMC/HVC call; not really used today.
  args[6]: x0..x5 (this does not fully conform to SMCCC, which allows x6
           to carry an argument as well, but user space can use the
           GET_ONE_REG ioctl for such rare cases).

Return:
  args[0..3]: x0..x3 as defined in SMCCC. KVM extracts args[0..3] and
              writes them to x0..x3 when the hypercall exit returns.

A flag, hypercall_forward, is added to turn hypercall forwarding on or
off; it defaults to false. Another flag, hypercall_excl_psci, excludes
PSCI from forwarding for backward compatibility; its value is only
meaningful when hypercall_forward is enabled.
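
For illustration only (not part of this patch): a minimal sketch of how a
VMM's vcpu loop might consume this exit, assuming the kvm_run layout
described above. The helper emulate_smccc_call() is a made-up placeholder
for whatever SMC/HVC emulation the VMM implements.

/* Hypothetical VMM-side handling of KVM_EXIT_HYPERCALL (sketch only). */
#include <linux/kvm.h>
#include <sys/ioctl.h>

#define SMCCC_RET_NOT_SUPPORTED	((__u64)-1)	/* per SMCCC */

/* Assumed VMM helper, not defined by this patch. */
extern void emulate_smccc_call(__u64 func_id, __u64 *args, __u64 *ret);

static int run_vcpu(int vcpu_fd, struct kvm_run *run)
{
	for (;;) {
		if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
			return -1;

		if (run->exit_reason == KVM_EXIT_HYPERCALL) {
			__u64 ret[4] = { SMCCC_RET_NOT_SUPPORTED, 0, 0, 0 };

			/* args[0] is the SMCCC function ID, args[1..5] are x1..x5 */
			emulate_smccc_call(run->hypercall.args[0],
					   &run->hypercall.args[1], ret);

			/*
			 * Write x0..x3 back into args[0..3]; KVM copies them
			 * into the guest's x0..x3 on the next KVM_RUN.
			 */
			run->hypercall.args[0] = ret[0];
			run->hypercall.args[1] = ret[1];
			run->hypercall.args[2] = ret[2];
			run->hypercall.args[3] = ret[3];
			continue;
		}
		/* other exit reasons elided */
		return 0;
	}
}
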
Signed-off-by: Heyi Guo
Cc: Peter Maydell
Cc: Dave Martin
Cc: Marc Zyngier
Cc: Mark Rutland
Cc: James Morse
Cc: Julien Thierry
Cc: Suzuki K Poulose
Cc: Russell King
Cc: Catalin Marinas
Cc: Will Deacon
---
 arch/arm/include/asm/kvm_host.h   |  5 +++++
 arch/arm64/include/asm/kvm_host.h |  5 +++++
 include/kvm/arm_psci.h            |  1 +
 virt/kvm/arm/arm.c                |  2 ++
 virt/kvm/arm/psci.c               | 30 ++++++++++++++++++++++++++++--
 5 files changed, 41 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 8a37c8e..68ccaf0 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -76,6 +76,11 @@ struct kvm_arch {
 
 	/* Mandated version of PSCI */
 	u32 psci_version;
+
+	/* Flags to control hypercall forwarding to userspace */
+	bool hypercall_forward;
+	/* Exclude PSCI from hypercall forwarding and let KVM handle it */
+	bool hypercall_excl_psci;
 };
 
 #define KVM_NR_MEM_OBJS     40

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f656169..e47ac25 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -83,6 +83,11 @@ struct kvm_arch {
 
 	/* Mandated version of PSCI */
 	u32 psci_version;
+
+	/* Flags to control hypercall forwarding to userspace */
+	bool hypercall_forward;
+	/* Exclude PSCI from hypercall forwarding and let KVM handle it */
+	bool hypercall_excl_psci;
 };
 
 #define KVM_NR_MEM_OBJS     40

diff --git a/include/kvm/arm_psci.h b/include/kvm/arm_psci.h
index 632e78b..9c9a2dc 100644
--- a/include/kvm/arm_psci.h
+++ b/include/kvm/arm_psci.h
@@ -48,5 +48,6 @@ static inline int kvm_psci_version(struct kvm_vcpu *vcpu, struct kvm *kvm)
 int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
 int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
 int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
+void kvm_handle_hypercall_return(struct kvm_vcpu *vcpu, struct kvm_run *run);
 
 #endif /* __KVM_ARM_PSCI_H__ */

diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 35a0698..2f4ca21 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -673,6 +673,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		ret = kvm_handle_mmio_return(vcpu, vcpu->run);
 		if (ret)
 			return ret;
+	} else if (run->exit_reason == KVM_EXIT_HYPERCALL) {
+		kvm_handle_hypercall_return(vcpu, vcpu->run);
 	}
 
 	if (run->immediate_exit)

diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
index 87927f7..7e1f735 100644
--- a/virt/kvm/arm/psci.c
+++ b/virt/kvm/arm/psci.c
@@ -389,6 +389,7 @@ static int kvm_psci_call(struct kvm_vcpu *vcpu)
 
 int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
 {
+	struct kvm *kvm = vcpu->kvm;
 	u32 func_id = smccc_get_function(vcpu);
 	u32 val = SMCCC_RET_NOT_SUPPORTED;
 	u32 feature;
@@ -428,8 +429,27 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
 			break;
 		}
 		break;
-	default:
-		return kvm_psci_call(vcpu);
+	default: {
+		if (!kvm->arch.hypercall_forward ||
+		    kvm->arch.hypercall_excl_psci) {
+			u32 id = func_id & ~PSCI_0_2_64BIT;
+
+			if (id >= PSCI_0_2_FN_BASE && id <= PSCI_0_2_FN(0x1f))
+				return kvm_psci_call(vcpu);
+		}
+
+		if (kvm->arch.hypercall_forward) {
+			/* Exit to user space to process */
+			vcpu->run->exit_reason = KVM_EXIT_HYPERCALL;
+			vcpu->run->hypercall.nr = kvm_vcpu_get_hsr(vcpu) &
+						  ESR_ELx_ISS_MASK;
+			vcpu->run->hypercall.args[0] = func_id;
+			vcpu->run->hypercall.args[1] = smccc_get_arg1(vcpu);
+			vcpu->run->hypercall.args[2] = smccc_get_arg2(vcpu);
+			vcpu->run->hypercall.args[3] = smccc_get_arg3(vcpu);
+			return 0;
+		}
+	}
 	}
 
 	smccc_set_retval(vcpu, val, 0, 0, 0);
@@ -603,3 +623,9 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 
 	return -EINVAL;
 }
+
+void kvm_handle_hypercall_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	smccc_set_retval(vcpu, run->hypercall.args[0], run->hypercall.args[1],
+			 run->hypercall.args[2], run->hypercall.args[3]);
+}

From patchwork Tue Sep 24 15:20:54 2019
X-Patchwork-Submitter: Heyi Guo
X-Patchwork-Id: 11159075
From: Heyi Guo
Cc: Heyi Guo, Peter Maydell, Dave Martin, Marc Zyngier, Mark Rutland,
    James Morse, Julien Thierry, Suzuki K Poulose, Catalin Marinas,
    Will Deacon, Paolo Bonzini, Radim Krčmář
Subject: [RFC PATCH 2/2] kvm/arm64: expose hypercall_forwarding capability
Date: Tue, 24 Sep 2019 23:20:54 +0800
Message-ID: <1569338454-26202-3-git-send-email-guoheyi@huawei.com>
In-Reply-To: <1569338454-26202-1-git-send-email-guoheyi@huawei.com>
References: <1569338454-26202-1-git-send-email-guoheyi@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

Add a new KVM capability, KVM_CAP_FORWARD_HYPERCALL, for user space to
probe whether KVM supports forwarding hypercalls. The capability must be
enabled by user space explicitly, since we don't want user space
applications to have to deal with unexpected hypercall exits.

We also use an additional argument to pass an exclusion bit mask,
requesting KVM to forward all hypercalls except the classes specified in
the mask. Currently only PSCI can be excluded, so that we can stay
consistent with the old PSCI processing flow.
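
For illustration only (not part of this patch): a minimal sketch of how a
VMM might probe and enable the capability while keeping PSCI handled in
the kernel. The two KVM_CAP_FORWARD_HYPERCALL* constants only exist with
this series applied.

/* Hypothetical VMM-side setup (sketch only). */
#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

static int enable_hypercall_forwarding(int vm_fd)
{
	struct kvm_enable_cap cap;

	/* Probe: kvm_arch_vm_ioctl_check_extension() reports 1 if supported */
	if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_FORWARD_HYPERCALL) <= 0)
		return -1;	/* not supported; keep the in-kernel flow */

	memset(&cap, 0, sizeof(cap));
	cap.cap = KVM_CAP_FORWARD_HYPERCALL;
	/* args[0] is the exclusion mask; leave PSCI to KVM */
	cap.args[0] = KVM_CAP_FORWARD_HYPERCALL_EXCL_PSCI;

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}
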
Signed-off-by: Heyi Guo
Cc: Peter Maydell
Cc: Dave Martin
Cc: Marc Zyngier
Cc: Mark Rutland
Cc: James Morse
Cc: Julien Thierry
Cc: Suzuki K Poulose
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Paolo Bonzini
Cc: "Radim Krčmář"
---
 arch/arm64/kvm/reset.c   | 25 +++++++++++++++++++++++++
 include/uapi/linux/kvm.h |  3 +++
 2 files changed, 28 insertions(+)

diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index f4a8ae9..2201b62 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -95,6 +95,9 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		r = has_vhe() && system_supports_address_auth() &&
 		    system_supports_generic_auth();
 		break;
+	case KVM_CAP_FORWARD_HYPERCALL:
+		r = 1;
+		break;
 	default:
 		r = 0;
 	}
@@ -102,6 +105,28 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	return r;
 }
 
+int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
+			    struct kvm_enable_cap *cap)
+{
+	if (cap->flags)
+		return -EINVAL;
+
+	switch (cap->cap) {
+	case KVM_CAP_FORWARD_HYPERCALL: {
+		__u64 exclude_flags = cap->args[0];
+		/* Only support excluding PSCI right now */
+		if (exclude_flags & ~KVM_CAP_FORWARD_HYPERCALL_EXCL_PSCI)
+			return -EINVAL;
+		kvm->arch.hypercall_forward = true;
+		if (exclude_flags & KVM_CAP_FORWARD_HYPERCALL_EXCL_PSCI)
+			kvm->arch.hypercall_excl_psci = true;
+		return 0;
+	}
+	}
+
+	return -EINVAL;
+}
+
 unsigned int kvm_sve_max_vl;
 
 int kvm_arm_init_sve(void)

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 5e3f12d..e3e5787 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -711,6 +711,8 @@ struct kvm_enable_cap {
 	__u8  pad[64];
 };
 
+#define KVM_CAP_FORWARD_HYPERCALL_EXCL_PSCI	(1 << 0)
+
 /* for KVM_PPC_GET_PVINFO */
 #define KVM_PPC_PVINFO_FLAGS_EV_IDLE	(1<<0)
 
@@ -996,6 +998,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_ARM_PTRAUTH_ADDRESS 171
 #define KVM_CAP_ARM_PTRAUTH_GENERIC 172
 #define KVM_CAP_PMU_EVENT_FILTER 173
+#define KVM_CAP_FORWARD_HYPERCALL 174
 
 #ifdef KVM_CAP_IRQ_ROUTING