From patchwork Fri May 8 03:29:11 2020
X-Patchwork-Submitter: Gavin Shan
X-Patchwork-Id: 11535409
From: Gavin Shan
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH RFCv2 1/9] arm64: Probe for the presence of KVM hypervisor services during boot
Date: Fri, 8 May 2020 13:29:11 +1000
Message-Id: <20200508032919.52147-2-gshan@redhat.com>
In-Reply-To: <20200508032919.52147-1-gshan@redhat.com>
References: <20200508032919.52147-1-gshan@redhat.com>

From: Will Deacon

Although the SMCCC specification provides some limited functionality for
describing the presence of hypervisor and firmware services, this is
generally applicable only to functions designated as "Arm Architecture
Service Functions" and no portable discovery mechanism is provided for
standard hypervisor services, despite having a designated range of
function identifiers reserved by the specification.

In an attempt to avoid the need for additional firmware changes every
time a new function is added, introduce a UID to identify the service
provider as being compatible with KVM. Once this has been established,
additional services can be discovered via a feature bitmap.
Cc: Marc Zyngier
Signed-off-by: Will Deacon
Signed-off-by: Gavin Shan
---
 arch/arm64/include/asm/hypervisor.h | 11 +++++++++
 arch/arm64/kernel/setup.c           | 35 +++++++++++++++++++++++++++++
 include/linux/arm-smccc.h           | 26 +++++++++++++++++++++
 3 files changed, 72 insertions(+)

diff --git a/arch/arm64/include/asm/hypervisor.h b/arch/arm64/include/asm/hypervisor.h
index f9cc1d021791..91e4bd890819 100644
--- a/arch/arm64/include/asm/hypervisor.h
+++ b/arch/arm64/include/asm/hypervisor.h
@@ -2,6 +2,17 @@
 #ifndef _ASM_ARM64_HYPERVISOR_H
 #define _ASM_ARM64_HYPERVISOR_H
 
+#include <linux/arm-smccc.h>
 #include <asm/xen/hypervisor.h>
 
+static inline bool kvm_arm_hyp_service_available(u32 func_id)
+{
+	extern DECLARE_BITMAP(__kvm_arm_hyp_services, ARM_SMCCC_KVM_NUM_FUNCS);
+
+	if (func_id >= ARM_SMCCC_KVM_NUM_FUNCS)
+		return -EINVAL;
+
+	return test_bit(func_id, __kvm_arm_hyp_services);
+}
+
 #endif
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 3fd2c11c09fc..61c3774c7bc9 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -7,6 +7,7 @@
  */
 
 #include <linux/acpi.h>
+#include <linux/arm-smccc.h>
 #include <linux/export.h>
 #include <linux/kernel.h>
 #include <linux/stddef.h>
@@ -275,6 +276,39 @@ static int __init reserve_memblock_reserved_regions(void)
 arch_initcall(reserve_memblock_reserved_regions);
 
 u64 __cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = INVALID_HWID };
+DECLARE_BITMAP(__kvm_arm_hyp_services, ARM_SMCCC_KVM_NUM_FUNCS) = { };
+
+static void __init kvm_init_hyp_services(void)
+{
+	int i;
+	struct arm_smccc_res res;
+
+	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
+		return;
+
+	arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID, &res);
+	if (res.a0 != ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_0 ||
+	    res.a1 != ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_1 ||
+	    res.a2 != ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_2 ||
+	    res.a3 != ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_3)
+		return;
+
+	memset(&res, 0, sizeof(res));
+	arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID, &res);
+	for (i = 0; i < 32; ++i) {
+		if (res.a0 & (i))
+			set_bit(i + (32 * 0), __kvm_arm_hyp_services);
+		if (res.a1 & (i))
+			set_bit(i + (32 * 1), __kvm_arm_hyp_services);
+		if (res.a2 & (i))
+			set_bit(i + (32 * 2), __kvm_arm_hyp_services);
+		if (res.a3 & (i))
+			set_bit(i + (32 * 3), __kvm_arm_hyp_services);
+	}
+
+	pr_info("KVM hypervisor services detected (0x%08lx 0x%08lx 0x%08lx 0x%08lx)\n",
+		res.a3, res.a2, res.a1, res.a0);
+}
 
 void __init setup_arch(char **cmdline_p)
 {
@@ -344,6 +378,7 @@ void __init setup_arch(char **cmdline_p)
 	else
 		psci_acpi_init();
 
+	kvm_init_hyp_services();
 	init_bootcpu_ops();
 	smp_init_cpus();
 	smp_build_mpidr_hash();
diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index 59494df0f55b..bdc0124a064a 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -46,11 +46,14 @@
 #define ARM_SMCCC_OWNER_OEM		3
 #define ARM_SMCCC_OWNER_STANDARD	4
 #define ARM_SMCCC_OWNER_STANDARD_HYP	5
+#define ARM_SMCCC_OWNER_VENDOR_HYP	6
 #define ARM_SMCCC_OWNER_TRUSTED_APP	48
 #define ARM_SMCCC_OWNER_TRUSTED_APP_END	49
 #define ARM_SMCCC_OWNER_TRUSTED_OS	50
 #define ARM_SMCCC_OWNER_TRUSTED_OS_END	63
 
+#define ARM_SMCCC_FUNC_QUERY_CALL_UID	0xff01
+
 #define ARM_SMCCC_QUIRK_NONE		0
 #define ARM_SMCCC_QUIRK_QCOM_A6		1 /* Save/restore register a6 */
@@ -77,6 +80,29 @@
 		   ARM_SMCCC_SMC_32,				\
 		   0, 0x7fff)
 
+#define ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID			\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,			\
+			   ARM_SMCCC_SMC_32,			\
+			   ARM_SMCCC_OWNER_VENDOR_HYP,		\
+			   ARM_SMCCC_FUNC_QUERY_CALL_UID)
+
+/* KVM UID value: 28b46fb6-2ec5-11e9-a9ca-4b564d003a74 */
+#define ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_0	0xb66fb428U
+#define ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_1	0xe911c52eU
+#define ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_2	0x564bcaa9U
+#define ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_3	0x743a004dU
+
+/* KVM "vendor specific" services */
+#define ARM_SMCCC_KVM_FUNC_FEATURES		0
+#define ARM_SMCCC_KVM_FUNC_FEATURES_2		127
+#define ARM_SMCCC_KVM_NUM_FUNCS			128
+
+#define ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID		\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,			\
+			   ARM_SMCCC_SMC_32,			\
+			   ARM_SMCCC_OWNER_VENDOR_HYP,		\
+			   ARM_SMCCC_KVM_FUNC_FEATURES)
+
 #ifndef __ASSEMBLY__
 
 #include <linux/linkage.h>
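For illustration, this is how a guest driver might consume the interface
once probing has run at boot. The caller below is hypothetical and not
part of the patch; only kvm_arm_hyp_service_available() and the
ARM_SMCCC_* constants above are real:

	#include <linux/arm-smccc.h>
	#include <asm/hypervisor.h>

	static int example_init(void)
	{
		struct arm_smccc_res res;

		/* Only proceed if the KVM "features" service was probed at boot */
		if (!kvm_arm_hyp_service_available(ARM_SMCCC_KVM_FUNC_FEATURES))
			return -EOPNOTSUPP;

		/* Safe to issue the vendor-specific hypercall now */
		arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID,
				     &res);
		return 0;
	}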
From patchwork Fri May 8 03:29:12 2020
X-Patchwork-Submitter: Gavin Shan
X-Patchwork-Id: 11535411
From: Gavin Shan
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH RFCv2 2/9] arm/arm64: KVM: Advertise KVM UID to guests via SMCCC
Date: Fri, 8 May 2020 13:29:12 +1000
Message-Id: <20200508032919.52147-3-gshan@redhat.com>
In-Reply-To: <20200508032919.52147-1-gshan@redhat.com>
References: <20200508032919.52147-1-gshan@redhat.com>

From: Will Deacon

We can advertise ourselves to guests as KVM and provide a basic features
bitmap for discoverability of future hypervisor services.
Cc: Marc Zyngier
Signed-off-by: Will Deacon
Signed-off-by: Gavin Shan
---
 virt/kvm/arm/hypercalls.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/virt/kvm/arm/hypercalls.c b/virt/kvm/arm/hypercalls.c
index 550dfa3e53cd..db6dce3d0e23 100644
--- a/virt/kvm/arm/hypercalls.c
+++ b/virt/kvm/arm/hypercalls.c
@@ -12,13 +12,13 @@
 int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
 {
 	u32 func_id = smccc_get_function(vcpu);
-	long val = SMCCC_RET_NOT_SUPPORTED;
+	u32 val[4] = {SMCCC_RET_NOT_SUPPORTED};
 	u32 feature;
 	gpa_t gpa;
 
 	switch (func_id) {
 	case ARM_SMCCC_VERSION_FUNC_ID:
-		val = ARM_SMCCC_VERSION_1_1;
+		val[0] = ARM_SMCCC_VERSION_1_1;
 		break;
 	case ARM_SMCCC_ARCH_FEATURES_FUNC_ID:
 		feature = smccc_get_arg1(vcpu);
@@ -28,10 +28,10 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
 		case KVM_BP_HARDEN_UNKNOWN:
 			break;
 		case KVM_BP_HARDEN_WA_NEEDED:
-			val = SMCCC_RET_SUCCESS;
+			val[0] = SMCCC_RET_SUCCESS;
 			break;
 		case KVM_BP_HARDEN_NOT_REQUIRED:
-			val = SMCCC_RET_NOT_REQUIRED;
+			val[0] = SMCCC_RET_NOT_REQUIRED;
 			break;
 		}
 		break;
@@ -41,31 +41,40 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
 			case KVM_SSBD_UNKNOWN:
 				break;
 			case KVM_SSBD_KERNEL:
-				val = SMCCC_RET_SUCCESS;
+				val[0] = SMCCC_RET_SUCCESS;
 				break;
 			case KVM_SSBD_FORCE_ENABLE:
 			case KVM_SSBD_MITIGATED:
-				val = SMCCC_RET_NOT_REQUIRED;
+				val[0] = SMCCC_RET_NOT_REQUIRED;
 				break;
 			}
 			break;
 		case ARM_SMCCC_HV_PV_TIME_FEATURES:
-			val = SMCCC_RET_SUCCESS;
+			val[0] = SMCCC_RET_SUCCESS;
 			break;
 		}
 		break;
 	case ARM_SMCCC_HV_PV_TIME_FEATURES:
-		val = kvm_hypercall_pv_features(vcpu);
+		val[0] = kvm_hypercall_pv_features(vcpu);
 		break;
 	case ARM_SMCCC_HV_PV_TIME_ST:
 		gpa = kvm_init_stolen_time(vcpu);
 		if (gpa != GPA_INVALID)
-			val = gpa;
+			val[0] = gpa;
+		break;
+	case ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID:
+		val[0] = ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_0;
+		val[1] = ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_1;
+		val[2] = ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_2;
+		val[3] = ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_3;
+		break;
+	case ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID:
+		val[0] = BIT(ARM_SMCCC_KVM_FUNC_FEATURES);
 		break;
 	default:
 		return kvm_psci_call(vcpu);
 	}
 
-	smccc_set_retval(vcpu, val, 0, 0, 0);
+	smccc_set_retval(vcpu, val[0], val[1], val[2], val[3]);
 	return 1;
 }
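For reference, the four UID register values advertised here are the KVM
UUID 28b46fb6-2ec5-11e9-a9ca-4b564d003a74 packed four bytes per 32-bit
register, least-significant byte first: 0xb66fb428 unpacks to the bytes
28 b4 6f b6, 0xe911c52e to 2e c5 11 e9, and so on. A standalone sketch of
the unpacking (illustrative only, not part of the patch):

	#include <stdint.h>
	#include <stdio.h>

	/* Print one UID register as four UUID bytes, LSB first. */
	static void put_word(uint32_t w)
	{
		int i;

		for (i = 0; i < 4; i++)
			printf("%02x", (w >> (8 * i)) & 0xff);
	}

	int main(void)
	{
		/* ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_{0..3} */
		uint32_t regs[4] = { 0xb66fb428, 0xe911c52e,
				     0x564bcaa9, 0x743a004d };
		int i;

		for (i = 0; i < 4; i++)
			put_word(regs[i]);
		printf("\n");	/* prints 28b46fb62ec511e9a9ca4b564d003a74 */
		return 0;
	}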
From patchwork Fri May 8 03:29:13 2020
X-Patchwork-Submitter: Gavin Shan
X-Patchwork-Id: 11535413
From: Gavin Shan
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH RFCv2 3/9] kvm/arm64: Rename kvm_vcpu_get_hsr() to kvm_vcpu_get_esr()
Date: Fri, 8 May 2020 13:29:13 +1000
Message-Id: <20200508032919.52147-4-gshan@redhat.com>
In-Reply-To: <20200508032919.52147-1-gshan@redhat.com>
References: <20200508032919.52147-1-gshan@redhat.com>
Since kvm/arm32 was removed, rename kvm_vcpu_get_hsr() to
kvm_vcpu_get_esr() to make it a bit more self-explanatory: on aarch64
the function returns the ESR rather than the HSR. This shouldn't cause
any functional changes.

Signed-off-by: Gavin Shan
---
 arch/arm64/include/asm/kvm_emulate.h | 36 +++++++++++++++-------------
 arch/arm64/kvm/handle_exit.c         | 12 +++++-----
 arch/arm64/kvm/hyp/switch.c          |  2 +-
 arch/arm64/kvm/sys_regs.c            |  6 ++---
 virt/kvm/arm/hyp/aarch32.c           |  2 +-
 virt/kvm/arm/hyp/vgic-v3-sr.c        |  4 ++--
 virt/kvm/arm/mmu.c                   |  6 ++---
 7 files changed, 35 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index a30b4eec7cb4..bd1a69e7c104 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -265,14 +265,14 @@ static inline bool vcpu_mode_priv(const struct kvm_vcpu *vcpu)
 	return mode != PSR_MODE_EL0t;
 }
 
-static __always_inline u32 kvm_vcpu_get_hsr(const struct kvm_vcpu *vcpu)
+static __always_inline u32 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu)
 {
 	return vcpu->arch.fault.esr_el2;
 }
 
 static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
 {
-	u32 esr = kvm_vcpu_get_hsr(vcpu);
+	u32 esr = kvm_vcpu_get_esr(vcpu);
 
 	if (esr & ESR_ELx_CV)
 		return (esr & ESR_ELx_COND_MASK) >> ESR_ELx_COND_SHIFT;
@@ -297,64 +297,66 @@ static inline u64 kvm_vcpu_get_disr(const struct kvm_vcpu *vcpu)
 
 static inline u32 kvm_vcpu_hvc_get_imm(const struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_get_hsr(vcpu) & ESR_ELx_xVC_IMM_MASK;
+	return kvm_vcpu_get_esr(vcpu) & ESR_ELx_xVC_IMM_MASK;
 }
 
 static __always_inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_ISV);
+	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_ISV);
 }
 
 static inline unsigned long kvm_vcpu_dabt_iss_nisv_sanitized(const struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_get_hsr(vcpu) & (ESR_ELx_CM | ESR_ELx_WNR | ESR_ELx_FSC);
+	return kvm_vcpu_get_esr(vcpu) &
+	       (ESR_ELx_CM | ESR_ELx_WNR | ESR_ELx_FSC);
 }
 
 static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SSE);
+	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_SSE);
 }
 
 static inline bool kvm_vcpu_dabt_issf(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SF);
+	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_SF);
 }
 
 static __always_inline int kvm_vcpu_dabt_get_rd(const struct kvm_vcpu *vcpu)
 {
-	return (kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT;
+	return (kvm_vcpu_get_esr(vcpu) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT;
 }
 
 static __always_inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_S1PTW);
+	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_S1PTW);
 }
 
 static __always_inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR) ||
+	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_WNR) ||
 		kvm_vcpu_dabt_iss1tw(vcpu); /* AF/DBM update */
 }
 
 static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_CM);
+	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_CM);
 }
 
 static __always_inline unsigned int kvm_vcpu_dabt_get_as(const struct kvm_vcpu *vcpu)
 {
-	return 1 << ((kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SAS) >> ESR_ELx_SAS_SHIFT);
+	return 1 << ((kvm_vcpu_get_esr(vcpu) & ESR_ELx_SAS) >>
+		     ESR_ELx_SAS_SHIFT);
 }
 
 /* This one is not specific to Data Abort */
 static __always_inline bool kvm_vcpu_trap_il_is32bit(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_IL);
+	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_IL);
 }
 
 static __always_inline u8 kvm_vcpu_trap_get_class(const struct kvm_vcpu *vcpu)
 {
-	return ESR_ELx_EC(kvm_vcpu_get_hsr(vcpu));
+	return ESR_ELx_EC(kvm_vcpu_get_esr(vcpu));
 }
 
 static inline bool kvm_vcpu_trap_is_iabt(const struct kvm_vcpu *vcpu)
@@ -364,12 +366,12 @@ static inline bool kvm_vcpu_trap_is_iabt(const struct kvm_vcpu *vcpu)
 
 static __always_inline u8 kvm_vcpu_trap_get_fault(const struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_get_hsr(vcpu) & ESR_ELx_FSC;
+	return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC;
 }
 
 static __always_inline u8 kvm_vcpu_trap_get_fault_type(const struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_get_hsr(vcpu) & ESR_ELx_FSC_TYPE;
+	return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC_TYPE;
 }
 
 static __always_inline bool kvm_vcpu_dabt_isextabt(const struct kvm_vcpu *vcpu)
@@ -393,7 +395,7 @@ static __always_inline bool kvm_vcpu_dabt_isextabt(const struct kvm_vcpu *vcpu)
 
 static __always_inline int kvm_vcpu_sys_get_rt(struct kvm_vcpu *vcpu)
 {
-	u32 esr = kvm_vcpu_get_hsr(vcpu);
+	u32 esr = kvm_vcpu_get_esr(vcpu);
 
 	return ESR_ELx_SYS64_ISS_RT(esr);
 }
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index aacfc55de44c..c5b75a4d5eda 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -89,7 +89,7 @@ static int handle_no_fpsimd(struct kvm_vcpu *vcpu, struct kvm_run *run)
  */
 static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	if (kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WFx_ISS_WFE) {
+	if (kvm_vcpu_get_esr(vcpu) & ESR_ELx_WFx_ISS_WFE) {
 		trace_kvm_wfx_arm64(*vcpu_pc(vcpu), true);
 		vcpu->stat.wfe_exit_stat++;
 		kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
@@ -119,7 +119,7 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
  */
 static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	u32 hsr = kvm_vcpu_get_hsr(vcpu);
+	u32 hsr = kvm_vcpu_get_esr(vcpu);
 	int ret = 0;
 
 	run->exit_reason = KVM_EXIT_DEBUG;
@@ -146,7 +146,7 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	u32 hsr = kvm_vcpu_get_hsr(vcpu);
+	u32 hsr = kvm_vcpu_get_esr(vcpu);
 
 	kvm_pr_unimpl("Unknown exception class: hsr: %#08x -- %s\n",
 		      hsr, esr_get_class_string(hsr));
@@ -226,7 +226,7 @@ static exit_handle_fn arm_exit_handlers[] = {
 
 static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
 {
-	u32 hsr = kvm_vcpu_get_hsr(vcpu);
+	u32 hsr = kvm_vcpu_get_esr(vcpu);
 	u8 hsr_ec = ESR_ELx_EC(hsr);
 
 	return arm_exit_handlers[hsr_ec];
@@ -267,7 +267,7 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		int exception_index)
 {
 	if (ARM_SERROR_PENDING(exception_index)) {
-		u8 hsr_ec = ESR_ELx_EC(kvm_vcpu_get_hsr(vcpu));
+		u8 hsr_ec = ESR_ELx_EC(kvm_vcpu_get_esr(vcpu));
 
 		/*
 		 * HVC/SMC already have an adjusted PC, which we need
@@ -333,5 +333,5 @@ void handle_exit_early(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	exception_index = ARM_EXCEPTION_CODE(exception_index);
 
 	if (exception_index == ARM_EXCEPTION_EL1_SERROR)
-		kvm_handle_guest_serror(vcpu, kvm_vcpu_get_hsr(vcpu));
+		kvm_handle_guest_serror(vcpu, kvm_vcpu_get_esr(vcpu));
 }
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 8a1e81a400e0..2c3242bcfed2 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -437,7 +437,7 @@ static bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 
 static bool __hyp_text handle_tx2_tvm(struct kvm_vcpu *vcpu)
 {
-	u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_hsr(vcpu));
+	u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
 	int rt = kvm_vcpu_sys_get_rt(vcpu);
 	u64 val = vcpu_get_reg(vcpu, rt);
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 51db934702b6..5b61465927b7 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2214,7 +2214,7 @@ static int kvm_handle_cp_64(struct kvm_vcpu *vcpu,
 			    size_t nr_specific)
 {
 	struct sys_reg_params params;
-	u32 hsr = kvm_vcpu_get_hsr(vcpu);
+	u32 hsr = kvm_vcpu_get_esr(vcpu);
 	int Rt = kvm_vcpu_sys_get_rt(vcpu);
 	int Rt2 = (hsr >> 10) & 0x1f;
 
@@ -2271,7 +2271,7 @@ static int kvm_handle_cp_32(struct kvm_vcpu *vcpu,
 			    size_t nr_specific)
 {
 	struct sys_reg_params params;
-	u32 hsr = kvm_vcpu_get_hsr(vcpu);
+	u32 hsr = kvm_vcpu_get_esr(vcpu);
 	int Rt = kvm_vcpu_sys_get_rt(vcpu);
 
 	params.is_aarch32 = true;
@@ -2387,7 +2387,7 @@ static void reset_sys_reg_descs(struct kvm_vcpu *vcpu,
 int kvm_handle_sys_reg(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
 	struct sys_reg_params params;
-	unsigned long esr = kvm_vcpu_get_hsr(vcpu);
+	unsigned long esr = kvm_vcpu_get_esr(vcpu);
 	int Rt = kvm_vcpu_sys_get_rt(vcpu);
 	int ret;
 
diff --git a/virt/kvm/arm/hyp/aarch32.c b/virt/kvm/arm/hyp/aarch32.c
index d31f267961e7..864b477e660a 100644
--- a/virt/kvm/arm/hyp/aarch32.c
+++ b/virt/kvm/arm/hyp/aarch32.c
@@ -51,7 +51,7 @@ bool __hyp_text kvm_condition_valid32(const struct kvm_vcpu *vcpu)
 	int cond;
 
 	/* Top two bits non-zero?  Unconditional. */
-	if (kvm_vcpu_get_hsr(vcpu) >> 30)
+	if (kvm_vcpu_get_esr(vcpu) >> 30)
 		return true;
 
 	/* Is condition field valid? */
diff --git a/virt/kvm/arm/hyp/vgic-v3-sr.c b/virt/kvm/arm/hyp/vgic-v3-sr.c
index ccf1fde9836c..8a7a14ec9120 100644
--- a/virt/kvm/arm/hyp/vgic-v3-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v3-sr.c
@@ -441,7 +441,7 @@ static int __hyp_text __vgic_v3_bpr_min(void)
 
 static int __hyp_text __vgic_v3_get_group(struct kvm_vcpu *vcpu)
 {
-	u32 esr = kvm_vcpu_get_hsr(vcpu);
+	u32 esr = kvm_vcpu_get_esr(vcpu);
 	u8 crm = (esr & ESR_ELx_SYS64_ISS_CRM_MASK) >> ESR_ELx_SYS64_ISS_CRM_SHIFT;
 
 	return crm != 8;
@@ -1007,7 +1007,7 @@ int __hyp_text __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
 	bool is_read;
 	u32 sysreg;
 
-	esr = kvm_vcpu_get_hsr(vcpu);
+	esr = kvm_vcpu_get_esr(vcpu);
 	if (vcpu_mode_is_32bit(vcpu)) {
 		if (!kvm_condition_valid(vcpu)) {
 			__kvm_skip_instr(vcpu);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index e3b9ee268823..5da0d0e7519b 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1922,7 +1922,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 * For RAS the host kernel may handle this abort.
 		 * There is no need to pass the error into the guest.
 		 */
-		if (!kvm_handle_guest_sea(fault_ipa, kvm_vcpu_get_hsr(vcpu)))
+		if (!kvm_handle_guest_sea(fault_ipa, kvm_vcpu_get_esr(vcpu)))
 			return 1;
 
 		if (unlikely(!is_iabt)) {
@@ -1931,7 +1931,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		}
 	}
 
-	trace_kvm_guest_fault(*vcpu_pc(vcpu), kvm_vcpu_get_hsr(vcpu),
+	trace_kvm_guest_fault(*vcpu_pc(vcpu), kvm_vcpu_get_esr(vcpu),
 			      kvm_vcpu_get_hfar(vcpu), fault_ipa);
 
 	/* Check the stage-2 fault is trans. fault or write fault */
@@ -1940,7 +1940,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		kvm_err("Unsupported FSC: EC=%#x xFSC=%#lx ESR_EL2=%#lx\n",
 			kvm_vcpu_trap_get_class(vcpu),
 			(unsigned long)kvm_vcpu_trap_get_fault(vcpu),
-			(unsigned long)kvm_vcpu_get_hsr(vcpu));
+			(unsigned long)kvm_vcpu_get_esr(vcpu));
 		return -EFAULT;
 	}
From patchwork Fri May 8 03:29:14 2020
X-Patchwork-Submitter: Gavin Shan
X-Patchwork-Id: 11535415
From: Gavin Shan
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH RFCv2 4/9] kvm/arm64: Detach ESR operator from vCPU struct
Date: Fri, 8 May 2020 13:29:14 +1000
Message-Id: <20200508032919.52147-5-gshan@redhat.com>
In-Reply-To: <20200508032919.52147-1-gshan@redhat.com>
References: <20200508032919.52147-1-gshan@redhat.com>
kvm_emulate.h defines a set of inline functions that read the ESR from
the vCPU fault information struct and then operate on it. They are
therefore tied to the vCPU struct, which limits their usage scope.
Detach these functions from the vCPU struct by taking the ESR value as
a parameter; the caller is then free to decide where the ESR is read
from. This shouldn't cause any functional changes.

Signed-off-by: Gavin Shan
---
 arch/arm64/include/asm/kvm_emulate.h     | 83 +++++++++++-------------
 arch/arm64/kvm/handle_exit.c             | 20 ++++--
 arch/arm64/kvm/hyp/switch.c              | 24 ++++---
 arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c |  7 +-
 arch/arm64/kvm/inject_fault.c            |  4 +-
 arch/arm64/kvm/sys_regs.c                | 12 ++--
 virt/kvm/arm/arm.c                       |  4 +-
 virt/kvm/arm/hyp/aarch32.c               |  2 +-
 virt/kvm/arm/hyp/vgic-v3-sr.c            |  5 +-
 virt/kvm/arm/mmio.c                      | 27 ++++----
 virt/kvm/arm/mmu.c                       | 22 ++++---
 11 files changed, 112 insertions(+), 98 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index bd1a69e7c104..2873bf6dc85e 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -270,10 +270,8 @@ static __always_inline u32 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu)
 	return vcpu->arch.fault.esr_el2;
 }
 
-static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
+static __always_inline int kvm_vcpu_get_condition(u32 esr)
 {
-	u32 esr = kvm_vcpu_get_esr(vcpu);
-
 	if (esr & ESR_ELx_CV)
 		return (esr & ESR_ELx_COND_MASK) >> ESR_ELx_COND_SHIFT;
 
@@ -295,88 +293,86 @@
 static inline u64 kvm_vcpu_get_disr(const struct kvm_vcpu *vcpu)
 {
 	return vcpu->arch.fault.disr_el1;
 }
 
-static inline u32 kvm_vcpu_hvc_get_imm(const struct kvm_vcpu *vcpu)
+static __always_inline u32 kvm_vcpu_hvc_get_imm(u32 esr)
 {
-	return kvm_vcpu_get_esr(vcpu) & ESR_ELx_xVC_IMM_MASK;
+	return esr & ESR_ELx_xVC_IMM_MASK;
 }
 
-static __always_inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_vcpu_dabt_isvalid(u32 esr)
 {
-	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_ISV);
+	return !!(esr & ESR_ELx_ISV);
 }
 
-static inline unsigned long kvm_vcpu_dabt_iss_nisv_sanitized(const struct kvm_vcpu *vcpu)
+static __always_inline unsigned long kvm_vcpu_dabt_iss_nisv_sanitized(u32 esr)
 {
-	return kvm_vcpu_get_esr(vcpu) &
-	       (ESR_ELx_CM | ESR_ELx_WNR | ESR_ELx_FSC);
+	return esr & (ESR_ELx_CM | ESR_ELx_WNR | ESR_ELx_FSC);
 }
 
-static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_vcpu_dabt_issext(u32 esr)
 {
-	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_SSE);
+	return !!(esr & ESR_ELx_SSE);
 }
 
-static inline bool kvm_vcpu_dabt_issf(const struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_vcpu_dabt_issf(u32 esr)
 {
-	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_SF);
+	return !!(esr & ESR_ELx_SF);
 }
 
-static __always_inline int kvm_vcpu_dabt_get_rd(const struct kvm_vcpu *vcpu)
+static __always_inline int kvm_vcpu_dabt_get_rd(u32 esr)
 {
-	return (kvm_vcpu_get_esr(vcpu) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT;
+	return (esr & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT;
 }
 
-static __always_inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_vcpu_dabt_iss1tw(u32 esr)
 {
-	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_S1PTW);
+	return !!(esr & ESR_ELx_S1PTW);
 }
 
-static __always_inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_vcpu_dabt_iswrite(u32 esr)
 {
-	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_WNR) ||
-		kvm_vcpu_dabt_iss1tw(vcpu); /* AF/DBM update */
+	return !!(esr & ESR_ELx_WNR) ||
+	       kvm_vcpu_dabt_iss1tw(esr); /* AF/DBM update */
 }
 
-static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_vcpu_dabt_is_cm(u32 esr)
 {
-	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_CM);
+	return !!(esr & ESR_ELx_CM);
 }
 
-static __always_inline unsigned int kvm_vcpu_dabt_get_as(const struct kvm_vcpu *vcpu)
+static __always_inline unsigned int kvm_vcpu_dabt_get_as(u32 esr)
 {
-	return 1 << ((kvm_vcpu_get_esr(vcpu) & ESR_ELx_SAS) >>
-		     ESR_ELx_SAS_SHIFT);
+	return 1 << ((esr & ESR_ELx_SAS) >> ESR_ELx_SAS_SHIFT);
 }
 
 /* This one is not specific to Data Abort */
-static __always_inline bool kvm_vcpu_trap_il_is32bit(const struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_vcpu_trap_il_is32bit(u32 esr)
 {
-	return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_IL);
+	return !!(esr & ESR_ELx_IL);
 }
 
-static __always_inline u8 kvm_vcpu_trap_get_class(const struct kvm_vcpu *vcpu)
+static __always_inline u8 kvm_vcpu_trap_get_class(u32 esr)
 {
-	return ESR_ELx_EC(kvm_vcpu_get_esr(vcpu));
+	return ESR_ELx_EC(esr);
 }
 
-static inline bool kvm_vcpu_trap_is_iabt(const struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_vcpu_trap_is_iabt(u32 esr)
 {
-	return kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_IABT_LOW;
+	return kvm_vcpu_trap_get_class(esr) == ESR_ELx_EC_IABT_LOW;
 }
 
-static __always_inline u8 kvm_vcpu_trap_get_fault(const struct kvm_vcpu *vcpu)
+static __always_inline u8 kvm_vcpu_trap_get_fault(u32 esr)
 {
-	return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC;
+	return esr & ESR_ELx_FSC;
 }
 
-static __always_inline u8 kvm_vcpu_trap_get_fault_type(const struct kvm_vcpu *vcpu)
+static __always_inline u8 kvm_vcpu_trap_get_fault_type(u32 esr)
 {
-	return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC_TYPE;
+	return esr & ESR_ELx_FSC_TYPE;
 }
 
-static __always_inline bool kvm_vcpu_dabt_isextabt(const struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_vcpu_dabt_isextabt(u32 esr)
 {
-	switch (kvm_vcpu_trap_get_fault(vcpu)) {
+	switch (kvm_vcpu_trap_get_fault(esr)) {
 	case FSC_SEA:
 	case FSC_SEA_TTW0:
 	case FSC_SEA_TTW1:
@@ -393,18 +389,17 @@ static __always_inline bool kvm_vcpu_dabt_isextabt(const struct kvm_vcpu *vcpu)
 	}
 }
 
-static __always_inline int kvm_vcpu_sys_get_rt(struct kvm_vcpu *vcpu)
+static __always_inline int kvm_vcpu_sys_get_rt(u32 esr)
 {
-	u32 esr = kvm_vcpu_get_esr(vcpu);
 	return ESR_ELx_SYS64_ISS_RT(esr);
 }
 
-static inline bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_is_write_fault(u32 esr)
 {
-	if (kvm_vcpu_trap_is_iabt(vcpu))
+	if (kvm_vcpu_trap_is_iabt(esr))
 		return false;
 
-	return kvm_vcpu_dabt_iswrite(vcpu);
+	return kvm_vcpu_dabt_iswrite(esr);
 }
 
 static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
@@ -527,7 +522,7 @@ static __always_inline void __hyp_text __kvm_skip_instr(struct kvm_vcpu *vcpu)
 	*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
 	vcpu->arch.ctxt.gp_regs.regs.pstate = read_sysreg_el2(SYS_SPSR);
 
-	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(kvm_vcpu_get_esr(vcpu)));
 
 	write_sysreg_el2(vcpu->arch.ctxt.gp_regs.regs.pstate, SYS_SPSR);
 	write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR);
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index c5b75a4d5eda..00858db82a64 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -38,7 +38,7 @@ static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	int ret;
 
 	trace_kvm_hvc_arm64(*vcpu_pc(vcpu), vcpu_get_reg(vcpu, 0),
-			    kvm_vcpu_hvc_get_imm(vcpu));
+			    kvm_vcpu_hvc_get_imm(kvm_vcpu_get_esr(vcpu)));
 	vcpu->stat.hvc_exit_stat++;
 
 	ret = kvm_hvc_call_handler(vcpu);
@@ -52,6 +52,8 @@ static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
+	u32 esr = kvm_vcpu_get_esr(vcpu);
+
 	/*
 	 * "If an SMC instruction executed at Non-secure EL1 is
 	 * trapped to EL2 because HCR_EL2.TSC is 1, the exception is a
@@ -61,7 +63,7 @@ static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	 * otherwise return to the same address...
 	 */
 	vcpu_set_reg(vcpu, 0, ~0UL);
-	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(esr));
 	return 1;
 }
 
@@ -89,7 +91,9 @@ static int handle_no_fpsimd(struct kvm_vcpu *vcpu, struct kvm_run *run)
  */
 static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	if (kvm_vcpu_get_esr(vcpu) & ESR_ELx_WFx_ISS_WFE) {
+	u32 esr = kvm_vcpu_get_esr(vcpu);
+
+	if (esr & ESR_ELx_WFx_ISS_WFE) {
 		trace_kvm_wfx_arm64(*vcpu_pc(vcpu), true);
 		vcpu->stat.wfe_exit_stat++;
 		kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
@@ -100,7 +104,7 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 	}
 
-	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(esr));
 
 	return 1;
 }
@@ -240,6 +244,7 @@ static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
  */
 static int handle_trap_exceptions(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
+	u32 esr = kvm_vcpu_get_esr(vcpu);
 	int handled;
 
 	/*
@@ -247,7 +252,7 @@ static int handle_trap_exceptions(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	 * that fail their condition code check"
 	 */
 	if (!kvm_condition_valid(vcpu)) {
-		kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+		kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(esr));
 		handled = 1;
 	} else {
 		exit_handle_fn exit_handler;
@@ -267,7 +272,8 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		int exception_index)
 {
 	if (ARM_SERROR_PENDING(exception_index)) {
-		u8 hsr_ec = ESR_ELx_EC(kvm_vcpu_get_esr(vcpu));
+		u32 esr = kvm_vcpu_get_esr(vcpu);
+		u8 hsr_ec = ESR_ELx_EC(esr);
 
 		/*
 		 * HVC/SMC already have an adjusted PC, which we need
@@ -276,7 +282,7 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		 */
 		if (hsr_ec == ESR_ELx_EC_HVC32 || hsr_ec == ESR_ELx_EC_HVC64 ||
 		    hsr_ec == ESR_ELx_EC_SMC32 || hsr_ec == ESR_ELx_EC_SMC64) {
-			u32 adj = kvm_vcpu_trap_il_is32bit(vcpu) ? 4 : 2;
+			u32 adj = kvm_vcpu_trap_il_is32bit(esr) ? 4 : 2;
 			*vcpu_pc(vcpu) -= adj;
 		}
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 2c3242bcfed2..369f22f49f3d 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -355,6 +355,7 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
 /* Check for an FPSIMD/SVE trap and handle as appropriate */
 static bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 {
+	u32 esr = kvm_vcpu_get_esr(vcpu);
 	bool vhe, sve_guest, sve_host;
 	u8 hsr_ec;
 
@@ -371,7 +372,7 @@ static bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 		vhe = has_vhe();
 	}
 
-	hsr_ec = kvm_vcpu_trap_get_class(vcpu);
+	hsr_ec = kvm_vcpu_trap_get_class(esr);
 	if (hsr_ec != ESR_ELx_EC_FP_ASIMD &&
 	    hsr_ec != ESR_ELx_EC_SVE)
 		return false;
@@ -438,7 +439,8 @@ static bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 static bool __hyp_text handle_tx2_tvm(struct kvm_vcpu *vcpu)
 {
 	u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
-	int rt = kvm_vcpu_sys_get_rt(vcpu);
+	u32 esr = kvm_vcpu_get_esr(vcpu);
+	int rt = kvm_vcpu_sys_get_rt(esr);
 	u64 val = vcpu_get_reg(vcpu, rt);
 
 	/*
@@ -497,6 +499,8 @@ static bool __hyp_text handle_tx2_tvm(struct kvm_vcpu *vcpu)
  */
 static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
+	u32 esr = kvm_vcpu_get_esr(vcpu);
+
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
 		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
 
@@ -510,7 +514,7 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 		goto exit;
 
 	if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
-	    kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 &&
+	    kvm_vcpu_trap_get_class(esr) == ESR_ELx_EC_SYS64 &&
 	    handle_tx2_tvm(vcpu))
 		return true;
 
@@ -530,11 +534,11 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 	if (static_branch_unlikely(&vgic_v2_cpuif_trap)) {
 		bool valid;
 
-		valid = kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_DABT_LOW &&
-			kvm_vcpu_trap_get_fault_type(vcpu) == FSC_FAULT &&
-			kvm_vcpu_dabt_isvalid(vcpu) &&
-			!kvm_vcpu_dabt_isextabt(vcpu) &&
-			!kvm_vcpu_dabt_iss1tw(vcpu);
+		valid = kvm_vcpu_trap_get_class(esr) == ESR_ELx_EC_DABT_LOW &&
+			kvm_vcpu_trap_get_fault_type(esr) == FSC_FAULT &&
+			kvm_vcpu_dabt_isvalid(esr) &&
+			!kvm_vcpu_dabt_isextabt(esr) &&
+			!kvm_vcpu_dabt_iss1tw(esr);
 
 		if (valid) {
 			int ret = __vgic_v2_perform_cpuif_access(vcpu);
@@ -551,8 +555,8 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 	}
 
 	if (static_branch_unlikely(&vgic_v3_cpuif_trap) &&
-	    (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 ||
-	     kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_CP15_32)) {
+	    (kvm_vcpu_trap_get_class(esr) == ESR_ELx_EC_SYS64 ||
+	     kvm_vcpu_trap_get_class(esr) == ESR_ELx_EC_CP15_32)) {
 		int ret = __vgic_v3_perform_cpuif_access(vcpu);
 
 		if (ret == 1)
diff --git a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
index 4f3a087e36d5..bcf13a074b69 100644
--- a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
+++ b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
@@ -36,6 +36,7 @@ int __hyp_text __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
 	struct vgic_dist *vgic = &kvm->arch.vgic;
+	u32 esr = kvm_vcpu_get_esr(vcpu);
 	phys_addr_t fault_ipa;
 	void __iomem *addr;
 	int rd;
@@ -50,7 +51,7 @@ int __hyp_text __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
 		return 0;
 
 	/* Reject anything but a 32bit access */
-	if (kvm_vcpu_dabt_get_as(vcpu) != sizeof(u32)) {
+	if (kvm_vcpu_dabt_get_as(esr) != sizeof(u32)) {
 		__kvm_skip_instr(vcpu);
 		return -1;
 	}
@@ -61,11 +62,11 @@ int __hyp_text __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
 		return -1;
 	}
 
-	rd = kvm_vcpu_dabt_get_rd(vcpu);
+	rd = kvm_vcpu_dabt_get_rd(esr);
 	addr  = hyp_symbol_addr(kvm_vgic_global_state)->vcpu_hyp_va;
 	addr += fault_ipa - vgic->vgic_cpu_base;
 
-	if (kvm_vcpu_dabt_iswrite(vcpu)) {
+	if (kvm_vcpu_dabt_iswrite(esr)) {
 		u32 data = vcpu_get_reg(vcpu, rd);
 		if (__is_be(vcpu)) {
 			/* guest pre-swabbed data, undo this for writel() */
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 6aafc2825c1c..0ae7c2e40e02 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -128,7 +128,7 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
 	 * Build an {i,d}abort, depending on the level and the
 	 * instruction set. Report an external synchronous abort.
 	 */
-	if (kvm_vcpu_trap_il_is32bit(vcpu))
+	if (kvm_vcpu_trap_il_is32bit(kvm_vcpu_get_esr(vcpu)))
 		esr |= ESR_ELx_IL;
 
 	/*
@@ -161,7 +161,7 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 	 * Build an unknown exception, depending on the instruction
 	 * set.
	 */
-	if (kvm_vcpu_trap_il_is32bit(vcpu))
+	if (kvm_vcpu_trap_il_is32bit(kvm_vcpu_get_esr(vcpu)))
 		esr |= ESR_ELx_IL;
 
 	vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 5b61465927b7..012fff834a4b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2126,6 +2126,7 @@ static void perform_access(struct kvm_vcpu *vcpu,
 			   struct sys_reg_params *params,
 			   const struct sys_reg_desc *r)
 {
+	u32 esr = kvm_vcpu_get_esr(vcpu);
 	trace_kvm_sys_access(*vcpu_pc(vcpu), params, r);
 
 	/* Check for regs disabled by runtime config */
@@ -2143,7 +2144,7 @@ static void perform_access(struct kvm_vcpu *vcpu,
 
 	/* Skip instruction if instructed so */
 	if (likely(r->access(vcpu, params, r)))
-		kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+		kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(esr));
 }
 
 /*
@@ -2180,7 +2181,8 @@ static int emulate_cp(struct kvm_vcpu *vcpu,
 static void unhandled_cp_access(struct kvm_vcpu *vcpu,
 				struct sys_reg_params *params)
 {
-	u8 hsr_ec = kvm_vcpu_trap_get_class(vcpu);
+	u32 esr = kvm_vcpu_get_esr(vcpu);
+	u8 hsr_ec = kvm_vcpu_trap_get_class(esr);
 	int cp = -1;
 
 	switch(hsr_ec) {
@@ -2215,7 +2217,7 @@ static int kvm_handle_cp_64(struct kvm_vcpu *vcpu,
 {
 	struct sys_reg_params params;
 	u32 hsr = kvm_vcpu_get_esr(vcpu);
-	int Rt = kvm_vcpu_sys_get_rt(vcpu);
+	int Rt = kvm_vcpu_sys_get_rt(hsr);
 	int Rt2 = (hsr >> 10) & 0x1f;
 
 	params.is_aarch32 = true;
@@ -2272,7 +2274,7 @@ static int kvm_handle_cp_32(struct kvm_vcpu *vcpu,
 {
 	struct sys_reg_params params;
 	u32 hsr = kvm_vcpu_get_esr(vcpu);
-	int Rt = kvm_vcpu_sys_get_rt(vcpu);
+	int Rt = kvm_vcpu_sys_get_rt(hsr);
 
 	params.is_aarch32 = true;
 	params.is_32bit = true;
@@ -2388,7 +2390,7 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
 	struct sys_reg_params params;
 	unsigned long esr = kvm_vcpu_get_esr(vcpu);
-	int Rt = kvm_vcpu_sys_get_rt(vcpu);
+	int Rt = kvm_vcpu_sys_get_rt(esr);
 	int ret;
 
 	trace_kvm_handle_sys_reg(esr);
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 48d0ec44ad77..2cbb57485760 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -808,7 +808,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 * guest time.
 		 */
 		guest_exit();
-		trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
+		trace_kvm_exit(ret,
+			       kvm_vcpu_trap_get_class(kvm_vcpu_get_esr(vcpu)),
+			       *vcpu_pc(vcpu));
 
 		/* Exit types that need handling before we can be preempted */
 		handle_exit_early(vcpu, run, ret);
diff --git a/virt/kvm/arm/hyp/aarch32.c b/virt/kvm/arm/hyp/aarch32.c
index 864b477e660a..df3055ab3a42 100644
--- a/virt/kvm/arm/hyp/aarch32.c
+++ b/virt/kvm/arm/hyp/aarch32.c
@@ -55,7 +55,7 @@ bool __hyp_text kvm_condition_valid32(const struct kvm_vcpu *vcpu)
 		return true;
 
 	/* Is condition field valid? */
-	cond = kvm_vcpu_get_condition(vcpu);
+	cond = kvm_vcpu_get_condition(kvm_vcpu_get_esr(vcpu));
 	if (cond == 0xE)
 		return true;
diff --git a/virt/kvm/arm/hyp/vgic-v3-sr.c b/virt/kvm/arm/hyp/vgic-v3-sr.c
index 8a7a14ec9120..bb2174b8a086 100644
--- a/virt/kvm/arm/hyp/vgic-v3-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v3-sr.c
@@ -1000,14 +1000,13 @@ static void __hyp_text __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu,
 
 int __hyp_text __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
 {
+	u32 esr = kvm_vcpu_get_esr(vcpu);
 	int rt;
-	u32 esr;
 	u32 vmcr;
 	void (*fn)(struct kvm_vcpu *, u32, int);
 	bool is_read;
 	u32 sysreg;
 
-	esr = kvm_vcpu_get_esr(vcpu);
 	if (vcpu_mode_is_32bit(vcpu)) {
 		if (!kvm_condition_valid(vcpu)) {
 			__kvm_skip_instr(vcpu);
@@ -1119,7 +1118,7 @@ int __hyp_text __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
 	}
 
 	vmcr = __vgic_v3_read_vmcr();
-	rt = kvm_vcpu_sys_get_rt(vcpu);
+	rt = kvm_vcpu_sys_get_rt(esr);
 	fn(vcpu, vmcr, rt);
 
 	__kvm_skip_instr(vcpu);
diff --git a/virt/kvm/arm/mmio.c b/virt/kvm/arm/mmio.c
index aedfcff99ac5..d92bee8c75e3 100644
--- a/virt/kvm/arm/mmio.c
+++ b/virt/kvm/arm/mmio.c
@@ -81,6 +81,7 @@ unsigned long kvm_mmio_read_buf(const void *buf, unsigned int len)
  */
 int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
+	u32 esr = kvm_vcpu_get_esr(vcpu);
 	unsigned long data;
 	unsigned int len;
 	int mask;
@@ -91,30 +92,30 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 	vcpu->mmio_needed = 0;
 
-	if (!kvm_vcpu_dabt_iswrite(vcpu)) {
-		len = kvm_vcpu_dabt_get_as(vcpu);
+	if (!kvm_vcpu_dabt_iswrite(esr)) {
+		len = kvm_vcpu_dabt_get_as(esr);
 		data = kvm_mmio_read_buf(run->mmio.data, len);
 
-		if (kvm_vcpu_dabt_issext(vcpu) &&
+		if (kvm_vcpu_dabt_issext(esr) &&
 		    len < sizeof(unsigned long)) {
 			mask = 1U << ((len * 8) - 1);
 			data = (data ^ mask) - mask;
 		}
 
-		if (!kvm_vcpu_dabt_issf(vcpu))
+		if (!kvm_vcpu_dabt_issf(esr))
 			data = data & 0xffffffff;
 
 		trace_kvm_mmio(KVM_TRACE_MMIO_READ, len, run->mmio.phys_addr,
 			       &data);
 		data = vcpu_data_host_to_guest(vcpu, data, len);
-		vcpu_set_reg(vcpu, kvm_vcpu_dabt_get_rd(vcpu), data);
+		vcpu_set_reg(vcpu, kvm_vcpu_dabt_get_rd(esr), data);
 	}
 
 	/*
	 * The MMIO instruction is emulated and should not be re-executed
	 * in the guest.
	 */
-	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(esr));
 
 	return 0;
 }
@@ -122,6 +123,7 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
 int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		 phys_addr_t fault_ipa)
 {
+	u32 esr = kvm_vcpu_get_esr(vcpu);
 	unsigned long data;
 	unsigned long rt;
 	int ret;
@@ -133,10 +135,11 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
	 * No valid syndrome? Ask userspace for help if it has
	 * voluntered to do so, and bail out otherwise.
	 */
-	if (!kvm_vcpu_dabt_isvalid(vcpu)) {
+	if (!kvm_vcpu_dabt_isvalid(esr)) {
 		if (vcpu->kvm->arch.return_nisv_io_abort_to_user) {
 			run->exit_reason = KVM_EXIT_ARM_NISV;
-			run->arm_nisv.esr_iss = kvm_vcpu_dabt_iss_nisv_sanitized(vcpu);
+			run->arm_nisv.esr_iss =
+				kvm_vcpu_dabt_iss_nisv_sanitized(esr);
 			run->arm_nisv.fault_ipa = fault_ipa;
 			return 0;
 		}
@@ -146,7 +149,7 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	}
 
 	/* Page table accesses IO mem: tell guest to fix its TTBR */
-	if (kvm_vcpu_dabt_iss1tw(vcpu)) {
+	if (kvm_vcpu_dabt_iss1tw(esr)) {
 		kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
 		return 1;
 	}
@@ -156,9 +159,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
	 * from the CPU. Then try if some in-kernel emulation feels
	 * responsible, otherwise let user space do its magic.
	 */
-	is_write = kvm_vcpu_dabt_iswrite(vcpu);
-	len = kvm_vcpu_dabt_get_as(vcpu);
-	rt = kvm_vcpu_dabt_get_rd(vcpu);
+	is_write = kvm_vcpu_dabt_iswrite(esr);
+	len = kvm_vcpu_dabt_get_as(esr);
+	rt = kvm_vcpu_dabt_get_rd(esr);
 
 	if (is_write) {
 		data = vcpu_data_guest_to_host(vcpu, vcpu_get_reg(vcpu, rt),
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 5da0d0e7519b..e462e0368fd9 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1661,6 +1661,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  unsigned long fault_status)
 {
 	int ret;
+	u32 esr = kvm_vcpu_get_esr(vcpu);
 	bool write_fault, writable, force_pte = false;
 	bool exec_fault, needs_exec;
 	unsigned long mmu_seq;
@@ -1674,8 +1675,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	bool logging_active = memslot_is_logging(memslot);
 	unsigned long vma_pagesize, flags = 0;
 
-	write_fault = kvm_is_write_fault(vcpu);
-	exec_fault = kvm_vcpu_trap_is_iabt(vcpu);
+	write_fault = kvm_is_write_fault(esr);
+	exec_fault = kvm_vcpu_trap_is_iabt(esr);
 	VM_BUG_ON(write_fault && exec_fault);
 
 	if (fault_status == FSC_PERM && !write_fault && !exec_fault) {
@@ -1903,6 +1904,7 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
  */
 int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
+	u32 esr = kvm_vcpu_get_esr(vcpu);
 	unsigned long fault_status;
 	phys_addr_t fault_ipa;
 	struct kvm_memory_slot *memslot;
@@ -1911,13 +1913,13 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	gfn_t gfn;
 	int ret, idx;
 
-	fault_status = kvm_vcpu_trap_get_fault_type(vcpu);
+	fault_status = kvm_vcpu_trap_get_fault_type(esr);
 	fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
-	is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
+	is_iabt = kvm_vcpu_trap_is_iabt(esr);
 
 	/* Synchronous External Abort? */
-	if (kvm_vcpu_dabt_isextabt(vcpu)) {
+	if (kvm_vcpu_dabt_isextabt(esr)) {
 		/*
		 * For RAS the host kernel may handle this abort.
		 * There is no need to pass the error into the guest.
@@ -1938,8 +1940,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run) if (fault_status != FSC_FAULT && fault_status != FSC_PERM && fault_status != FSC_ACCESS) { kvm_err("Unsupported FSC: EC=%#x xFSC=%#lx ESR_EL2=%#lx\n", - kvm_vcpu_trap_get_class(vcpu), - (unsigned long)kvm_vcpu_trap_get_fault(vcpu), + kvm_vcpu_trap_get_class(esr), + (unsigned long)kvm_vcpu_trap_get_fault(esr), (unsigned long)kvm_vcpu_get_esr(vcpu)); return -EFAULT; } @@ -1949,7 +1951,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run) gfn = fault_ipa >> PAGE_SHIFT; memslot = gfn_to_memslot(vcpu->kvm, gfn); hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable); - write_fault = kvm_is_write_fault(vcpu); + write_fault = kvm_is_write_fault(esr); if (kvm_is_error_hva(hva) || (write_fault && !writable)) { if (is_iabt) { /* Prefetch Abort on I/O address */ @@ -1967,8 +1969,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run) * So let's assume that the guest is just being * cautious, and skip the instruction. */ - if (kvm_vcpu_dabt_is_cm(vcpu)) { - kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu)); + if (kvm_vcpu_dabt_is_cm(esr)) { + kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(esr)); ret = 1; goto out_unlock; } From patchwork Fri May 8 03:29:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gavin Shan X-Patchwork-Id: 11535417 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 025DD913 for ; Fri, 8 May 2020 03:31:56 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id AB63F20731 for ; Fri, 8 May 2020 03:31:55 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="Fk4txpti"; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="LcBrbhOC" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org AB63F20731 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=75mOipNTnaDXwtINrBBjR/fplJlChTcyW1G/H6aghWM=; b=Fk4txptikm2vdE 0mIla0XvmDw08kasB/ZGzyRvRV0uo8smdv3VvPBGiGLiA+TqIksl2KG3E22OedI4tt6ctpixtX3Xi 2AJTK2KdW9Lmknu32WjF40Dk4hIrkXe7S3lewMB+cdC8Snih/+jJ4tzWbLHoJLmrXuSHftDveyw/X gddYjT+ofDPOtiUjwc0ytG40b6bhn3pL2pR9ainzddNRnF0EAEMK7voT2AcZsDB0XnU6JqQw6zNSy AyjY3h5vcFbirEXChTFHEHUNVvEzLrOVIGuazd5qHaQkaSOMoN8fYpoqxlCsTFFhTAg0N0f6bIez/ lM5OLP7QDD3sG8f8Nfig==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1jWtjr-0002nE-5b; Fri, 08 May 2020 
03:31:51 +0000 Received: from us-smtp-2.mimecast.com ([205.139.110.61] helo=us-smtp-delivery-1.mimecast.com) by bombadil.infradead.org with esmtps (Exim 4.92.3 #3 (Red Hat Linux)) id 1jWtiy-00023k-7s for linux-arm-kernel@lists.infradead.org; Fri, 08 May 2020 03:30:58 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1588908654; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=pzTi+REweF4JYDCs1ctBNy9J9QKo4jpuv/qLMrtjXcg=; b=LcBrbhOCsjWm9nWYNmoV4IDYAEsqNHM1vUgrNx7oJEimDGpScGWFGjgzAyBZ/WwHm+u6Bh a2Eyfq7Yp0GYgbmUgx8LDJY0sX6okq1nD7H4lyDL40dqI4CGr/k8uqJ9ZgBsaaDQrqFm0K eppGUd4l9zZxLeivALT3K5s8HpW1QSA= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-152-NUKTKxX2MyCAQDWHRAZ-lQ-1; Thu, 07 May 2020 23:30:52 -0400 X-MC-Unique: NUKTKxX2MyCAQDWHRAZ-lQ-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 19A06835B40; Fri, 8 May 2020 03:30:51 +0000 (UTC) Received: from localhost.localdomain.com (vpn2-54-199.bne.redhat.com [10.64.54.199]) by smtp.corp.redhat.com (Postfix) with ESMTP id E52E899CF; Fri, 8 May 2020 03:30:44 +0000 (UTC) From: Gavin Shan To: kvmarm@lists.cs.columbia.edu Subject: [PATCH RFCv2 5/9] kvm/arm64: Replace hsr with esr Date: Fri, 8 May 2020 13:29:15 +1000 Message-Id: <20200508032919.52147-6-gshan@redhat.com> In-Reply-To: <20200508032919.52147-1-gshan@redhat.com> References: <20200508032919.52147-1-gshan@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20200507_203056_377597_40409A97 X-CRM114-Status: GOOD ( 14.18 ) X-Spam-Score: -0.2 (/) X-Spam-Report: SpamAssassin version 3.4.4 on bombadil.infradead.org summary: Content analysis details: (-0.2 points) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 RCVD_IN_DNSWL_NONE RBL: Sender listed at https://www.dnswl.org/, no trust [205.139.110.61 listed in list.dnswl.org] -0.0 RCVD_IN_MSPIKE_H2 RBL: Average reputation (+2) [205.139.110.61 listed in wl.mailspike.net] -0.0 SPF_PASS SPF: sender matches SPF record 0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record -0.1 DKIM_VALID_AU Message has a valid DKIM or DK signature from author's domain -0.1 DKIM_VALID_EF Message has a valid DKIM or DK signature from envelope-from domain 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily valid -0.1 DKIM_VALID Message has at least one valid DKIM or DK signature -0.0 DKIMWL_WL_HIGH DKIMwl.org - Whitelisted High sender X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: mark.rutland@arm.com, aarcange@redhat.com, drjones@redhat.com, suzuki.poulose@arm.com, maz@kernel.org, linux-kernel@vger.kernel.org, eric.auger@redhat.com, james.morse@arm.com, shan.gavin@gmail.com, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: 
linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org This replaces the variable names to make them self-explaining. The tracepoint isn't changed accordingly because it's part of the ABI: * @hsr to @esr * @hsr_ec to @ec * Use kvm_vcpu_trap_get_class() helper if possible Signed-off-by: Gavin Shan --- arch/arm64/kvm/handle_exit.c | 28 ++++++++++++++-------------- arch/arm64/kvm/hyp/switch.c | 9 ++++----- arch/arm64/kvm/sys_regs.c | 30 +++++++++++++++--------------- 3 files changed, 33 insertions(+), 34 deletions(-) diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index 00858db82a64..e3b3dcd5b811 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -123,13 +123,13 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run) */ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, struct kvm_run *run) { - u32 hsr = kvm_vcpu_get_esr(vcpu); + u32 esr = kvm_vcpu_get_esr(vcpu); int ret = 0; run->exit_reason = KVM_EXIT_DEBUG; - run->debug.arch.hsr = hsr; + run->debug.arch.hsr = esr; - switch (ESR_ELx_EC(hsr)) { + switch (kvm_vcpu_trap_get_class(esr)) { case ESR_ELx_EC_WATCHPT_LOW: run->debug.arch.far = vcpu->arch.fault.far_el2; /* fall through */ @@ -139,8 +139,8 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, struct kvm_run *run) case ESR_ELx_EC_BRK64: break; default: - kvm_err("%s: un-handled case hsr: %#08x\n", - __func__, (unsigned int) hsr); + kvm_err("%s: un-handled case esr: %#08x\n", + __func__, (unsigned int)esr); ret = -1; break; } @@ -150,10 +150,10 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, struct kvm_run *run) static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run) { - u32 hsr = kvm_vcpu_get_esr(vcpu); + u32 esr = kvm_vcpu_get_esr(vcpu); - kvm_pr_unimpl("Unknown exception class: hsr: %#08x -- %s\n", - hsr, esr_get_class_string(hsr)); + kvm_pr_unimpl("Unknown exception class: esr: %#08x -- %s\n", + esr, esr_get_class_string(esr)); kvm_inject_undefined(vcpu); return 1; @@ -230,10 +230,10 @@ static exit_handle_fn arm_exit_handlers[] = { static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu) { - u32 hsr = kvm_vcpu_get_esr(vcpu); - u8 hsr_ec = ESR_ELx_EC(hsr); + u32 esr = kvm_vcpu_get_esr(vcpu); + u8 ec = kvm_vcpu_trap_get_class(esr); - return arm_exit_handlers[hsr_ec]; + return arm_exit_handlers[ec]; } /* @@ -273,15 +273,15 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run, { if (ARM_SERROR_PENDING(exception_index)) { u32 esr = kvm_vcpu_get_esr(vcpu); - u8 hsr_ec = ESR_ELx_EC(esr); + u8 ec = kvm_vcpu_trap_get_class(esr); /* * HVC/SMC already have an adjusted PC, which we need * to correct in order to return to after having * injected the SError. */ - if (hsr_ec == ESR_ELx_EC_HVC32 || hsr_ec == ESR_ELx_EC_HVC64 || - hsr_ec == ESR_ELx_EC_SMC32 || hsr_ec == ESR_ELx_EC_SMC64) { + if (ec == ESR_ELx_EC_HVC32 || ec == ESR_ELx_EC_HVC64 || + ec == ESR_ELx_EC_SMC32 || ec == ESR_ELx_EC_SMC64) { u32 adj = kvm_vcpu_trap_il_is32bit(esr) ?
4 : 2; *vcpu_pc(vcpu) -= adj; } diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c index 369f22f49f3d..7bf4840bf90e 100644 --- a/arch/arm64/kvm/hyp/switch.c +++ b/arch/arm64/kvm/hyp/switch.c @@ -356,8 +356,8 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu) static bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) { u32 esr = kvm_vcpu_get_esr(vcpu); + u8 ec = kvm_vcpu_trap_get_class(esr); bool vhe, sve_guest, sve_host; - u8 hsr_ec; if (!system_supports_fpsimd()) return false; @@ -372,14 +372,13 @@ static bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) vhe = has_vhe(); } - hsr_ec = kvm_vcpu_trap_get_class(esr); - if (hsr_ec != ESR_ELx_EC_FP_ASIMD && - hsr_ec != ESR_ELx_EC_SVE) + if (ec != ESR_ELx_EC_FP_ASIMD && + ec != ESR_ELx_EC_SVE) return false; /* Don't handle SVE traps for non-SVE vcpus here: */ if (!sve_guest) - if (hsr_ec != ESR_ELx_EC_FP_ASIMD) + if (ec != ESR_ELx_EC_FP_ASIMD) return false; /* Valid trap. Switch the context: */ diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 012fff834a4b..58f81ab519af 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -2182,10 +2182,10 @@ static void unhandled_cp_access(struct kvm_vcpu *vcpu, struct sys_reg_params *params) { u32 esr = kvm_vcpu_get_esr(vcpu); - u8 hsr_ec = kvm_vcpu_trap_get_class(esr); + u8 ec = kvm_vcpu_trap_get_class(esr); int cp = -1; - switch(hsr_ec) { + switch (ec) { case ESR_ELx_EC_CP15_32: case ESR_ELx_EC_CP15_64: cp = 15; @@ -2216,17 +2216,17 @@ static int kvm_handle_cp_64(struct kvm_vcpu *vcpu, size_t nr_specific) { struct sys_reg_params params; - u32 hsr = kvm_vcpu_get_esr(vcpu); - int Rt = kvm_vcpu_sys_get_rt(hsr); - int Rt2 = (hsr >> 10) & 0x1f; + u32 esr = kvm_vcpu_get_esr(vcpu); + int Rt = kvm_vcpu_sys_get_rt(esr); + int Rt2 = (esr >> 10) & 0x1f; params.is_aarch32 = true; params.is_32bit = false; - params.CRm = (hsr >> 1) & 0xf; - params.is_write = ((hsr & 1) == 0); + params.CRm = (esr >> 1) & 0xf; + params.is_write = ((esr & 1) == 0); params.Op0 = 0; - params.Op1 = (hsr >> 16) & 0xf; + params.Op1 = (esr >> 16) & 0xf; params.Op2 = 0; params.CRn = 0; @@ -2273,18 +2273,18 @@ static int kvm_handle_cp_32(struct kvm_vcpu *vcpu, size_t nr_specific) { struct sys_reg_params params; - u32 hsr = kvm_vcpu_get_esr(vcpu); - int Rt = kvm_vcpu_sys_get_rt(hsr); + u32 esr = kvm_vcpu_get_esr(vcpu); + int Rt = kvm_vcpu_sys_get_rt(esr); params.is_aarch32 = true; params.is_32bit = true; - params.CRm = (hsr >> 1) & 0xf; + params.CRm = (esr >> 1) & 0xf; params.regval = vcpu_get_reg(vcpu, Rt); - params.is_write = ((hsr & 1) == 0); - params.CRn = (hsr >> 10) & 0xf; + params.is_write = ((esr & 1) == 0); + params.CRn = (esr >> 10) & 0xf; params.Op0 = 0; - params.Op1 = (hsr >> 14) & 0x7; - params.Op2 = (hsr >> 17) & 0x7; + params.Op1 = (esr >> 14) & 0x7; + params.Op2 = (esr >> 17) & 0x7; if (!emulate_cp(vcpu, ¶ms, target_specific, nr_specific) || !emulate_cp(vcpu, ¶ms, global, nr_global)) { From patchwork Fri May 8 03:29:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gavin Shan X-Patchwork-Id: 11535421 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4C4F014C0 for ; Fri, 8 May 2020 03:32:16 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) 
(No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 203C32085B for ; Fri, 8 May 2020 03:32:16 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="YpTCRf2T"; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="BHYgWdtO" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 203C32085B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=TjK/mYUxwaa9Z2i3X9Y0Rgkt5gQVlbvzSfAiyNdsPQU=; b=YpTCRf2T2eV+jI 8ox0y5iN4mFM0l9Yju36uTPVcu59zepc7ltDMMImSdG+8v7946nBBl7jK4JrBspHmjN541YIQtD1L 6wZOC7sR+BJK+Y7CC6YZxAnwFrYSRZw/e9dq0iogEApJAQjbckTtnDEYr5pMxz4QypK3uJjzwONeV zghFCELLjw5AGLtl8z44IlNDlB8lNDrWmr6wDJ494vjw2C9SXpej3YpJ5B7hlLYQ4sJ51rPwRR4Qb qhjP86rMsSHGbQsFyITzoMl86f9QOxqQdqYmorxAJGbpBEpScefMgtQyuxLrsOCxI+CwJJMGdtxt/ gHClxALBjjJU0Zk1/OCw==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1jWtkD-00035e-8S; Fri, 08 May 2020 03:32:13 +0000 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120] helo=us-smtp-1.mimecast.com) by bombadil.infradead.org with esmtps (Exim 4.92.3 #3 (Red Hat Linux)) id 1jWtj6-00029v-R5 for linux-arm-kernel@lists.infradead.org; Fri, 08 May 2020 03:31:08 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1588908662; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=uOxS8qUh1P601xhSbaR+Vu3Pd0b4Lw9cqUexthKtkJg=; b=BHYgWdtOtGvdau6o/gNuRzFjs2aqbI82AVkPgqi4F4ZrN9aXJ8baFnqE8dqgoyBxXHG6jp FUH6oNe9lUimWYCoTY+W5tJPfGlnjpAxMNyapXf/44OCfacD/JZGwLwlKm4ryW/b26LscZ 0z0XRTTrDUGxDd6yfIpYUdkQPPZeZd0= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-188-U2Rao8xiNb2Hn3U1HK1l_A-1; Thu, 07 May 2020 23:30:59 -0400 X-MC-Unique: U2Rao8xiNb2Hn3U1HK1l_A-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 38B041895A29; Fri, 8 May 2020 03:30:58 +0000 (UTC) Received: from localhost.localdomain.com (vpn2-54-199.bne.redhat.com [10.64.54.199]) by smtp.corp.redhat.com (Postfix) with ESMTP id 8109699CF; Fri, 8 May 2020 03:30:51 +0000 (UTC) From: Gavin Shan To: kvmarm@lists.cs.columbia.edu Subject: [PATCH RFCv2 6/9] kvm/arm64: Export kvm_handle_user_mem_abort() with prefault mode Date: Fri, 8 May 2020 13:29:16 +1000 Message-Id: <20200508032919.52147-7-gshan@redhat.com> In-Reply-To: <20200508032919.52147-1-gshan@redhat.com> References: 
<20200508032919.52147-1-gshan@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20200507_203104_988135_078E290B X-CRM114-Status: GOOD ( 11.85 ) X-Spam-Score: -0.2 (/) X-Spam-Report: SpamAssassin version 3.4.4 on bombadil.infradead.org summary: Content analysis details: (-0.2 points) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 RCVD_IN_DNSWL_NONE RBL: Sender listed at https://www.dnswl.org/, no trust [205.139.110.120 listed in list.dnswl.org] -0.0 RCVD_IN_MSPIKE_H2 RBL: Average reputation (+2) [205.139.110.120 listed in wl.mailspike.net] -0.0 SPF_PASS SPF: sender matches SPF record 0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record -0.1 DKIM_VALID_AU Message has a valid DKIM or DK signature from author's domain -0.1 DKIM_VALID_EF Message has a valid DKIM or DK signature from envelope-from domain 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily valid -0.1 DKIM_VALID Message has at least one valid DKIM or DK signature -0.0 DKIMWL_WL_HIGH DKIMwl.org - Whitelisted High sender X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: mark.rutland@arm.com, aarcange@redhat.com, drjones@redhat.com, suzuki.poulose@arm.com, maz@kernel.org, linux-kernel@vger.kernel.org, eric.auger@redhat.com, james.morse@arm.com, shan.gavin@gmail.com, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org This renames user_mem_abort() to kvm_handle_user_mem_abort(), and then exports it. The function will be used by the asynchronous page fault code to populate a page table entry once the corresponding page has been populated from the backing store (e.g. a swap partition): * Parameter @fault_status is replaced by @esr. * The parameters are reordered based on their importance. This shouldn't cause any functional changes.
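
As a minimal sketch of the intended usage: the wrapper below is illustrative only and not part of this series; it merely assumes the reworked prototype above. The asynchronous page fault completion path introduced later is expected to retry the abort along these lines:

/*
 * Illustrative only: once the faulting page has been populated in the
 * background, the completion path can retry the abort by calling the
 * exported helper directly. The ESR is carried in the async-PF work
 * item, so the fault type no longer needs to be re-derived from the
 * vCPU. Passing prefault == true avoids queueing another async page
 * fault for the same access.
 */
static int example_prefault_retry(struct kvm_vcpu *vcpu, unsigned int esr,
				  struct kvm_memory_slot *memslot,
				  phys_addr_t fault_ipa, unsigned long hva)
{
	return kvm_handle_user_mem_abort(vcpu, esr, memslot,
					 fault_ipa, hva, true);
}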
Signed-off-by: Gavin Shan --- arch/arm64/include/asm/kvm_host.h | 4 ++++ virt/kvm/arm/mmu.c | 14 ++++++++------ 2 files changed, 12 insertions(+), 6 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 32c8a675e5a4..f77c706777ec 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -437,6 +437,10 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu, struct kvm_vcpu_events *events); #define KVM_ARCH_WANT_MMU_NOTIFIER +int kvm_handle_user_mem_abort(struct kvm_vcpu *vcpu, unsigned int esr, + struct kvm_memory_slot *memslot, + phys_addr_t fault_ipa, unsigned long hva, + bool prefault); int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end); int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte); diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c index e462e0368fd9..95aaabb2b1fc 100644 --- a/virt/kvm/arm/mmu.c +++ b/virt/kvm/arm/mmu.c @@ -1656,12 +1656,12 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot, (hva & ~(map_size - 1)) + map_size <= uaddr_end; } -static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, - struct kvm_memory_slot *memslot, unsigned long hva, - unsigned long fault_status) +int kvm_handle_user_mem_abort(struct kvm_vcpu *vcpu, unsigned int esr, + struct kvm_memory_slot *memslot, + phys_addr_t fault_ipa, unsigned long hva, + bool prefault) { - int ret; - u32 esr = kvm_vcpu_get_esr(vcpu); + unsigned int fault_status = kvm_vcpu_trap_get_fault_type(esr); bool write_fault, writable, force_pte = false; bool exec_fault, needs_exec; unsigned long mmu_seq; @@ -1674,6 +1674,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, pgprot_t mem_type = PAGE_S2; bool logging_active = memslot_is_logging(memslot); unsigned long vma_pagesize, flags = 0; + int ret; write_fault = kvm_is_write_fault(esr); exec_fault = kvm_vcpu_trap_is_iabt(esr); @@ -1995,7 +1996,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run) goto out_unlock; } - ret = user_mem_abort(vcpu, fault_ipa, memslot, hva, fault_status); + ret = kvm_handle_user_mem_abort(vcpu, esr, memslot, + fault_ipa, hva, false); if (ret == 0) ret = 1; out: From patchwork Fri May 8 03:29:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gavin Shan X-Patchwork-Id: 11535423 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B94FA913 for ; Fri, 8 May 2020 03:32:35 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 93E9420731 for ; Fri, 8 May 2020 03:32:35 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="MEzdQ+gN"; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="OnaEQ7/V" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 93E9420731 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; 
a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=5LrGLFsZOhdGNxTJR9s6e2iQoXlPVnXUTLci2cO9+mk=; b=MEzdQ+gNT5Z70Y +cXBbH4j73AaLPEqoW35Fe8aRzO3uyTltigBH2vsVK1MOjgdBc+xTavKuII3xXo+ADwKpbp/IaiFZ dvTib2T2SCNt6f0m7QIpC2AXxBqq7V2dfynhVPAqdoAL25viXt9+A3DMqq9rZ4XRsfJ0eCP9HEAY6 y3yJzjLqaAdw95eYHmS5jFsmrNf2Qjv6XtYxcnwzoKbB7FVHsfuy2ZeFnD2u06WBMuhJjfAIhE/DW LiwfkyZCVXmmywY0TP6/8l2tNz9laWRLyueJ3YH4zhDE9hbDnrECOdqD4xacyd9wL2W/093BNv/Q3 V2cAf5oOfS7fnguZVS5Q==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1jWtkW-0003LL-Cv; Fri, 08 May 2020 03:32:32 +0000 Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120] helo=us-smtp-1.mimecast.com) by bombadil.infradead.org with esmtps (Exim 4.92.3 #3 (Red Hat Linux)) id 1jWtjC-0002FW-JP for linux-arm-kernel@lists.infradead.org; Fri, 08 May 2020 03:31:15 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1588908669; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=1JW2QmxfM3dQHTg5w77I8MJwpXYVVNY3VWW9mn8DP/4=; b=OnaEQ7/VECNDcxRRec27159jSdS+3HskWmOXP4kTbBglTo5x5PoUWt9rT75iwvdGCqr8ln tgbWzHbshP91avC52ZOQefpK+wyl7ZARf/LjVKuWQJe/5/AgPNreyvt8whG8/8e2Ijw4iy 1zRBXv7rK747aLqaGI4LdD4TNl7bRIE= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-325-aep5ud-rOFuNyR3w7LzG_Q-1; Thu, 07 May 2020 23:31:04 -0400 X-MC-Unique: aep5ud-rOFuNyR3w7LzG_Q-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 0CE11461; Fri, 8 May 2020 03:31:03 +0000 (UTC) Received: from localhost.localdomain.com (vpn2-54-199.bne.redhat.com [10.64.54.199]) by smtp.corp.redhat.com (Postfix) with ESMTP id 9C6E799CF; Fri, 8 May 2020 03:30:58 +0000 (UTC) From: Gavin Shan To: kvmarm@lists.cs.columbia.edu Subject: [PATCH RFCv2 7/9] kvm/arm64: Support async page fault Date: Fri, 8 May 2020 13:29:17 +1000 Message-Id: <20200508032919.52147-8-gshan@redhat.com> In-Reply-To: <20200508032919.52147-1-gshan@redhat.com> References: <20200508032919.52147-1-gshan@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20200507_203110_826708_F136B572 X-CRM114-Status: GOOD ( 28.46 ) X-Spam-Score: -0.2 (/) X-Spam-Report: SpamAssassin version 3.4.4 on bombadil.infradead.org summary: Content analysis details: (-0.2 points) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 RCVD_IN_DNSWL_NONE RBL: Sender listed at https://www.dnswl.org/, no trust [207.211.31.120 listed in list.dnswl.org] -0.0 RCVD_IN_MSPIKE_H2 RBL: Average reputation (+2) [207.211.31.120 listed in wl.mailspike.net] -0.0 SPF_PASS SPF: sender matches SPF record 0.0 
SPF_HELO_NONE SPF: HELO does not publish an SPF Record -0.1 DKIM_VALID_AU Message has a valid DKIM or DK signature from author's domain -0.1 DKIM_VALID_EF Message has a valid DKIM or DK signature from envelope-from domain 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily valid -0.1 DKIM_VALID Message has at least one valid DKIM or DK signature -0.0 DKIMWL_WL_HIGH DKIMwl.org - Whitelisted High sender X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: mark.rutland@arm.com, aarcange@redhat.com, drjones@redhat.com, suzuki.poulose@arm.com, maz@kernel.org, linux-kernel@vger.kernel.org, eric.auger@redhat.com, james.morse@arm.com, shan.gavin@gmail.com, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org Page faults are handled in two stages: a stage 1 page fault is handled by the guest itself, while the guest traps to the host when the fault is caused by the stage 2 page table, for example by a missing mapping. The guest is then suspended until the requested page is populated, which may involve I/O activity if the page was swapped out previously. In that case, the guest has to stall for at least a few milliseconds, regardless of the overall system load, and from the guest's point of view no useful work is done during the suspended period. This adds asynchronous page fault support to improve the situation. A signal (PAGE_NOT_PRESENT) is sent to the guest if the requested page needs some time to be populated, so the guest may reschedule to another runnable process if possible. Otherwise, the vCPU is put into power-saving mode, which effectively causes a vCPU reschedule from the host's point of view. A follow-up signal (PAGE_READY) is sent to the guest once the requested page has been populated, and the suspended task is woken up or scheduled when the guest receives the signal. With this mechanism, the vCPU isn't stuck while the host populates the requested page. More details are highlighted below. Note the implementation is similar to what x86 has to some extent: * A dedicated SMCCC ID is reserved to enable, disable or configure the functionality. The single 64-bit parameter is conveyed by two registers (w2/w1). Bits[63:56] form a bitmap selecting the requested operation, such as enabling, disabling or configuring the feature. Bits[55:6] carry the physical address of the control block, or of the region where external data abort injection is disallowed. Bits[5:0] pass control flags. * A signal (PAGE_NOT_PRESENT) is sent to the guest if the requested page isn't ready. Meanwhile, a work item is started to populate the page asynchronously in the background. The stage 2 page table entry is updated accordingly, and another signal (PAGE_READY) is fired once the requested page has been populated. The signals are delivered to the guest by injecting data abort faults. * The signals are fired and consumed sequentially: no further signal is fired while one is still pending, awaiting the guest to consume it, because the injected data abort faults have to be delivered one at a time.
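
To make the register encoding concrete, a guest-side enable call might look like the sketch below. The helper name example_apf_enable is hypothetical and not part of this series; only the SMCCC function ID, the KVM_ASYNC_PF_ENABLED flag and the bit layout follow the description above, with the 64-bit parameter split across w1 (low half) and w2 (high half):

#include <linux/arm-smccc.h>
#include <linux/bits.h>

/*
 * Hypothetical guest-side helper, shown only to illustrate the
 * encoding: bits[63:56] select the operation (BIT(63) is the
 * enable/disable function), bits[55:6] carry the control block's
 * physical address, and bits[5:0] carry control flags.
 */
static void example_apf_enable(phys_addr_t control_block_pa)
{
	struct arm_smccc_res res;
	u64 data = BIT_ULL(63) |			  /* operation select */
		   (control_block_pa & GENMASK_ULL(55, 6)) | /* control block PA */
		   KVM_ASYNC_PF_ENABLED;		  /* control flags   */

	/* The 64-bit parameter goes out as w1 (low 32) and w2 (high 32) */
	arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_APF_FUNC_ID,
			     lower_32_bits(data), upper_32_bits(data), &res);
}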
Signed-off-by: Gavin Shan --- arch/arm64/include/asm/kvm_host.h | 43 ++++ arch/arm64/include/asm/kvm_para.h | 27 ++ arch/arm64/include/uapi/asm/Kbuild | 2 - arch/arm64/include/uapi/asm/kvm_para.h | 22 ++ arch/arm64/kvm/Kconfig | 1 + arch/arm64/kvm/Makefile | 2 + include/linux/arm-smccc.h | 6 + virt/kvm/arm/arm.c | 36 ++- virt/kvm/arm/async_pf.c | 335 +++++++++++++++++++++++++ virt/kvm/arm/hypercalls.c | 8 + virt/kvm/arm/mmu.c | 29 ++- 11 files changed, 506 insertions(+), 5 deletions(-) create mode 100644 arch/arm64/include/asm/kvm_para.h create mode 100644 arch/arm64/include/uapi/asm/kvm_para.h create mode 100644 virt/kvm/arm/async_pf.c diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index f77c706777ec..a207728d6f3f 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -250,6 +250,23 @@ struct vcpu_reset_state { bool reset; }; +#ifdef CONFIG_KVM_ASYNC_PF + +/* Should be a power of two number */ +#define ASYNC_PF_PER_VCPU 64 + +/* + * The association of gfn and token. The token will be sent to guest as + * page fault address. Also, the guest could be in aarch32 mode. So its + * length should be 32-bits. + */ +struct kvm_arch_async_pf { + u32 token; + gfn_t gfn; + u32 esr; +}; +#endif /* CONFIG_KVM_ASYNC_PF */ + struct kvm_vcpu_arch { struct kvm_cpu_context ctxt; void *sve_state; @@ -351,6 +368,17 @@ struct kvm_vcpu_arch { u64 last_steal; gpa_t base; } steal; + +#ifdef CONFIG_KVM_ASYNC_PF + struct { + struct gfn_to_hva_cache cache; + gfn_t gfns[ASYNC_PF_PER_VCPU]; + u64 control_block; + u16 id; + bool send_user_only; + u64 no_fault_inst_range; + } apf; +#endif }; /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ @@ -604,6 +632,21 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu, static inline void __cpu_init_stage2(void) {} +#ifdef CONFIG_KVM_ASYNC_PF +bool kvm_async_pf_hash_find(struct kvm_vcpu *vcpu, gfn_t gfn); +bool kvm_arch_can_inject_async_page_not_present(struct kvm_vcpu *vcpu); +bool kvm_arch_can_inject_async_page_present(struct kvm_vcpu *vcpu); +int kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, u32 esr, + gpa_t gpa, gfn_t gfn); +void kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu, + struct kvm_async_pf *work); +void kvm_arch_async_page_present(struct kvm_vcpu *vcpu, + struct kvm_async_pf *work); +void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, + struct kvm_async_pf *work); +long kvm_async_pf_hypercall(struct kvm_vcpu *vcpu); +#endif /* CONFIG_KVM_ASYNC_PF */ + /* Guest/host FPSIMD coordination helpers */ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu); void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/include/asm/kvm_para.h b/arch/arm64/include/asm/kvm_para.h new file mode 100644 index 000000000000..0ea481dd1c7a --- /dev/null +++ b/arch/arm64/include/asm/kvm_para.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_ARM_KVM_PARA_H +#define _ASM_ARM_KVM_PARA_H + +#include + +static inline bool kvm_check_and_clear_guest_paused(void) +{ + return false; +} + +static inline unsigned int kvm_arch_para_features(void) +{ + return 0; +} + +static inline unsigned int kvm_arch_para_hints(void) +{ + return 0; +} + +static inline bool kvm_para_available(void) +{ + return false; +} + +#endif /* _ASM_ARM_KVM_PARA_H */ diff --git a/arch/arm64/include/uapi/asm/Kbuild b/arch/arm64/include/uapi/asm/Kbuild index 602d137932dc..f66554cd5c45 100644 --- a/arch/arm64/include/uapi/asm/Kbuild +++ b/arch/arm64/include/uapi/asm/Kbuild @@ -1,3 +1 @@ # 
SPDX-License-Identifier: GPL-2.0 - -generic-y += kvm_para.h diff --git a/arch/arm64/include/uapi/asm/kvm_para.h b/arch/arm64/include/uapi/asm/kvm_para.h new file mode 100644 index 000000000000..e0bd0e579b9a --- /dev/null +++ b/arch/arm64/include/uapi/asm/kvm_para.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ +#ifndef _UAPI_ASM_ARM_KVM_PARA_H +#define _UAPI_ASM_ARM_KVM_PARA_H + +#include + +#define KVM_FEATURE_ASYNC_PF 0 + +/* Async PF */ +#define KVM_ASYNC_PF_ENABLED (1 << 0) +#define KVM_ASYNC_PF_SEND_ALWAYS (1 << 1) + +#define KVM_PV_REASON_PAGE_NOT_PRESENT 1 +#define KVM_PV_REASON_PAGE_READY 2 + +struct kvm_vcpu_pv_apf_data { + __u32 reason; + __u8 pad[60]; + __u32 enabled; +}; + +#endif /* _UAPI_ASM_ARM_KVM_PARA_H */ diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig index 449386d76441..1053e16b1739 100644 --- a/arch/arm64/kvm/Kconfig +++ b/arch/arm64/kvm/Kconfig @@ -34,6 +34,7 @@ config KVM select KVM_VFIO select HAVE_KVM_EVENTFD select HAVE_KVM_IRQFD + select KVM_ASYNC_PF select KVM_ARM_PMU if HW_PERF_EVENTS select HAVE_KVM_MSI select HAVE_KVM_IRQCHIP diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile index 5ffbdc39e780..3be24c1e401f 100644 --- a/arch/arm64/kvm/Makefile +++ b/arch/arm64/kvm/Makefile @@ -37,3 +37,5 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-debug.o kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/irqchip.o kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o +kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o +kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/arm/async_pf.o diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h index bdc0124a064a..22007dd3b9f0 100644 --- a/include/linux/arm-smccc.h +++ b/include/linux/arm-smccc.h @@ -94,6 +94,7 @@ /* KVM "vendor specific" services */ #define ARM_SMCCC_KVM_FUNC_FEATURES 0 +#define ARM_SMCCC_KVM_FUNC_APF 1 #define ARM_SMCCC_KVM_FUNC_FEATURES_2 127 #define ARM_SMCCC_KVM_NUM_FUNCS 128 @@ -102,6 +103,11 @@ ARM_SMCCC_SMC_32, \ ARM_SMCCC_OWNER_VENDOR_HYP, \ ARM_SMCCC_KVM_FUNC_FEATURES) +#define ARM_SMCCC_VENDOR_HYP_KVM_APF_FUNC_ID \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + ARM_SMCCC_OWNER_VENDOR_HYP, \ + ARM_SMCCC_KVM_FUNC_APF) #ifndef __ASSEMBLY__ diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c index 2cbb57485760..3f62899cef13 100644 --- a/virt/kvm/arm/arm.c +++ b/virt/kvm/arm/arm.c @@ -222,6 +222,11 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) */ r = 1; break; +#ifdef CONFIG_KVM_ASYNC_PF + case KVM_CAP_ASYNC_PF: + r = 1; + break; +#endif default: r = kvm_arch_vm_ioctl_check_extension(kvm, ext); break; @@ -269,6 +274,10 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) /* Force users to call KVM_ARM_VCPU_INIT */ vcpu->arch.target = -1; bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES); +#ifdef CONFIG_KVM_ASYNC_PF + vcpu->arch.apf.control_block = 0UL; + vcpu->arch.apf.no_fault_inst_range = 0x800; +#endif /* Set up the timer */ kvm_timer_vcpu_init(vcpu); @@ -426,8 +435,27 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu, int kvm_arch_vcpu_runnable(struct kvm_vcpu *v) { bool irq_lines = *vcpu_hcr(v) & (HCR_VI | HCR_VF); - return ((irq_lines || kvm_vgic_vcpu_pending_irq(v)) - && !v->arch.power_off && !v->arch.pause); + + if ((irq_lines || kvm_vgic_vcpu_pending_irq(v)) && + !v->arch.power_off && !v->arch.pause) + return true; + +#ifdef CONFIG_KVM_ASYNC_PF + if (v->arch.apf.control_block & KVM_ASYNC_PF_ENABLED) { + u32 val; + int ret; + + if 
(!list_empty_careful(&v->async_pf.done)) + return true; + + ret = kvm_read_guest_cached(v->kvm, &v->arch.apf.cache, + &val, sizeof(val)); + if (ret || val) + return true; + } +#endif + + return false; } bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu) @@ -683,6 +711,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) check_vcpu_requests(vcpu); +#ifdef CONFIG_KVM_ASYNC_PF + kvm_check_async_pf_completion(vcpu); +#endif + /* * Preparing the interrupts to be injected also * involves poking the GIC, which must be done in a diff --git a/virt/kvm/arm/async_pf.c b/virt/kvm/arm/async_pf.c new file mode 100644 index 000000000000..5be49d684de3 --- /dev/null +++ b/virt/kvm/arm/async_pf.c @@ -0,0 +1,335 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Asynchronous Page Fault Support + * + * Copyright (C) 2020 Red Hat, Inc., Gavin Shan + * + * Based on arch/x86/kernel/kvm.c + */ + +#include +#include +#include +#include + +static inline u32 kvm_async_pf_hash_fn(gfn_t gfn) +{ + return hash_32(gfn & 0xffffffff, order_base_2(ASYNC_PF_PER_VCPU)); +} + +static inline u32 kvm_async_pf_hash_next(u32 key) +{ + return (key + 1) & (ASYNC_PF_PER_VCPU - 1); +} + +static inline void kvm_async_pf_hash_reset(struct kvm_vcpu *vcpu) +{ + int i; + + for (i = 0; i < ASYNC_PF_PER_VCPU; i++) + vcpu->arch.apf.gfns[i] = ~0; +} + +/* + * Add gfn to the hash table. It's ensured there is a free entry + * when this function is called. + */ +static void kvm_async_pf_hash_add(struct kvm_vcpu *vcpu, gfn_t gfn) +{ + u32 key = kvm_async_pf_hash_fn(gfn); + + while (vcpu->arch.apf.gfns[key] != ~0) + key = kvm_async_pf_hash_next(key); + + vcpu->arch.apf.gfns[key] = gfn; +} + +static u32 kvm_async_pf_hash_slot(struct kvm_vcpu *vcpu, gfn_t gfn) +{ + u32 key = kvm_async_pf_hash_fn(gfn); + int i; + + for (i = 0; i < ASYNC_PF_PER_VCPU; i++) { + if (vcpu->arch.apf.gfns[key] == gfn || + vcpu->arch.apf.gfns[key] == ~0) + break; + + key = kvm_async_pf_hash_next(key); + } + + return key; +} + +static void kvm_async_pf_hash_remove(struct kvm_vcpu *vcpu, gfn_t gfn) +{ + u32 i, j, k; + + i = j = kvm_async_pf_hash_slot(vcpu, gfn); + while (true) { + vcpu->arch.apf.gfns[i] = ~0; + + do { + j = kvm_async_pf_hash_next(j); + if (vcpu->arch.apf.gfns[j] == ~0) + return; + + k = kvm_async_pf_hash_fn(vcpu->arch.apf.gfns[j]); + /* + * k lies cyclically in ]i,j] + * | i.k.j | + * |....j i.k.| or |.k..j i...| + */ + } while ((i <= j) ? 
(i < k && k <= j) : (i < k || k <= j)); + vcpu->arch.apf.gfns[i] = vcpu->arch.apf.gfns[j]; + i = j; + } +} + +bool kvm_async_pf_hash_find(struct kvm_vcpu *vcpu, gfn_t gfn) +{ + u32 key = kvm_async_pf_hash_slot(vcpu, gfn); + + return vcpu->arch.apf.gfns[key] == gfn; +} + +static inline int kvm_async_pf_read_cache(struct kvm_vcpu *vcpu, u32 *val) +{ + return kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.apf.cache, + val, sizeof(*val)); +} + +static inline int kvm_async_pf_write_cache(struct kvm_vcpu *vcpu, u32 val) +{ + return kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.apf.cache, + &val, sizeof(val)); +} + +bool kvm_arch_can_inject_async_page_not_present(struct kvm_vcpu *vcpu) +{ + u64 vbar, pc; + u32 val; + int ret; + + if (!(vcpu->arch.apf.control_block & KVM_ASYNC_PF_ENABLED)) + return false; + + if (vcpu->arch.apf.send_user_only && vcpu_mode_priv(vcpu)) + return false; + + /* Pending page fault, which isn't acknowledged by the guest */ + ret = kvm_async_pf_read_cache(vcpu, &val); + if (ret || val) + return false; + + /* + * Events can't be injected through data abort because it's + * going to clobber ELR_EL1, which might not be consumed + * (or saved) by the guest yet. + */ + vbar = vcpu_read_sys_reg(vcpu, VBAR_EL1); + pc = *vcpu_pc(vcpu); + if (pc >= vbar && pc < (vbar + vcpu->arch.apf.no_fault_inst_range)) + return false; + + return true; +} + +/* + * We need to deliver the page-present signal as quickly as possible + * because it's performance critical. So the signal is delivered no + * matter which privilege level the guest has. It's possible the signal + * can't be handled by the guest immediately. However, the host doesn't + * contribute to the delay anyway. + */ +bool kvm_arch_can_inject_async_page_present(struct kvm_vcpu *vcpu) +{ + u64 vbar, pc; + u32 val; + int ret; + + if (!(vcpu->arch.apf.control_block & KVM_ASYNC_PF_ENABLED)) + return true; + + /* Pending page fault, which isn't acknowledged by the guest */ + ret = kvm_async_pf_read_cache(vcpu, &val); + if (ret || val) + return false; + + /* + * Events can't be injected through data abort because it's + * going to clobber ELR_EL1, which might not be consumed + * (or saved) by the guest yet. + */ + vbar = vcpu_read_sys_reg(vcpu, VBAR_EL1); + pc = *vcpu_pc(vcpu); + if (pc >= vbar && pc < (vbar + vcpu->arch.apf.no_fault_inst_range)) + return false; + + return true; +} + +int kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, u32 esr, + gpa_t gpa, gfn_t gfn) +{ + struct kvm_arch_async_pf arch; + unsigned long hva = kvm_vcpu_gfn_to_hva(vcpu, gfn); + + arch.token = (vcpu->arch.apf.id++ << 16) | vcpu->vcpu_id; + arch.gfn = gfn; + arch.esr = esr; + + return kvm_setup_async_pf(vcpu, gpa, hva, &arch); +} + +/* + * It's guaranteed that there is no pending asynchronous page fault when + * this is called. It means all previously issued asynchronous page + * faults have been acknowledged. + */ +void kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu, + struct kvm_async_pf *work) +{ + int ret; + + kvm_async_pf_hash_add(vcpu, work->arch.gfn); + ret = kvm_async_pf_write_cache(vcpu, KVM_PV_REASON_PAGE_NOT_PRESENT); + if (ret) { + kvm_err("%s: Error %d writing cache\n", __func__, ret); + kvm_async_pf_hash_remove(vcpu, work->arch.gfn); + return; + } + + kvm_inject_dabt(vcpu, work->arch.token); +} + +/* + * It's guaranteed that there is no pending asynchronous page fault when + * this is called. It means all previously issued asynchronous page + * faults have been acknowledged.
+ */ +void kvm_arch_async_page_present(struct kvm_vcpu *vcpu, + struct kvm_async_pf *work) +{ + int ret; + + /* Broadcast wakeup */ + if (work->wakeup_all) + work->arch.token = ~0; + else + kvm_async_pf_hash_remove(vcpu, work->arch.gfn); + + ret = kvm_async_pf_write_cache(vcpu, KVM_PV_REASON_PAGE_READY); + if (ret) { + kvm_err("%s: Error %d writing cache\n", __func__, ret); + return; + } + + kvm_inject_dabt(vcpu, work->arch.token); +} + +void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, + struct kvm_async_pf *work) +{ + struct kvm_memory_slot *memslot; + unsigned int esr = work->arch.esr; + phys_addr_t gpa = work->cr2_or_gpa; + gfn_t gfn = gpa >> PAGE_SHIFT; + unsigned long hva; + bool write_fault, writable; + int idx; + + /* + * We shouldn't issue a prefault for the special work that wakes + * up all pending tasks, because the associated token (address) + * is invalid. + */ + if (work->wakeup_all) + return; + + /* + * The gpa was validated before the work was started. However, the + * memory slots might have changed since then, so we need to redo + * the validation here. + */ + idx = srcu_read_lock(&vcpu->kvm->srcu); + + write_fault = kvm_is_write_fault(esr); + memslot = gfn_to_memslot(vcpu->kvm, gfn); + hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable); + if (kvm_is_error_hva(hva) || (write_fault && !writable)) + goto out; + + kvm_handle_user_mem_abort(vcpu, esr, memslot, gpa, hva, true); + +out: + srcu_read_unlock(&vcpu->kvm->srcu, idx); +} + +static long kvm_async_pf_update_enable_reg(struct kvm_vcpu *vcpu, u64 data) +{ + bool enabled, enable; + gpa_t gpa = (data & ~0x3F); + int ret; + + enabled = !!(vcpu->arch.apf.control_block & KVM_ASYNC_PF_ENABLED); + enable = !!(data & KVM_ASYNC_PF_ENABLED); + if (enable == enabled) { + kvm_debug("%s: Async PF has been %s (0x%llx -> 0x%llx)\n", + __func__, enabled ?
"enabled" : "disabled", + vcpu->arch.apf.control_block, data); + return SMCCC_RET_NOT_REQUIRED; + } + + if (enable) { + ret = kvm_gfn_to_hva_cache_init( + vcpu->kvm, &vcpu->arch.apf.cache, + gpa + offsetof(struct kvm_vcpu_pv_apf_data, reason), + sizeof(u32)); + if (ret) { + kvm_err("%s: Error %d initializing cache on 0x%llx\n", + __func__, ret, data); + return SMCCC_RET_NOT_SUPPORTED; + } + + kvm_async_pf_hash_reset(vcpu); + vcpu->arch.apf.send_user_only = + !(data & KVM_ASYNC_PF_SEND_ALWAYS); + kvm_async_pf_wakeup_all(vcpu); + vcpu->arch.apf.control_block = data; + } else { + kvm_clear_async_pf_completion_queue(vcpu); + vcpu->arch.apf.control_block = data; + } + + return SMCCC_RET_SUCCESS; +} + +long kvm_async_pf_hypercall(struct kvm_vcpu *vcpu) +{ + u64 data, func, val, range; + long ret = SMCCC_RET_SUCCESS; + + data = (smccc_get_arg2(vcpu) << 32) | smccc_get_arg1(vcpu); + func = data & (0xfful << 56); + val = data & ~(0xfful << 56); + switch (func) { + case BIT(63): + ret = kvm_async_pf_update_enable_reg(vcpu, val); + break; + case BIT(62): + if (vcpu->arch.apf.control_block & KVM_ASYNC_PF_ENABLED) { + ret = SMCCC_RET_NOT_SUPPORTED; + break; + } + + range = vcpu->arch.apf.no_fault_inst_range; + vcpu->arch.apf.no_fault_inst_range = max(range, val); + break; + default: + kvm_err("%s: Unrecognized function 0x%llx\n", __func__, func); + ret = SMCCC_RET_NOT_SUPPORTED; + } + + return ret; +} diff --git a/virt/kvm/arm/hypercalls.c b/virt/kvm/arm/hypercalls.c index db6dce3d0e23..a7e0fe17e2f1 100644 --- a/virt/kvm/arm/hypercalls.c +++ b/virt/kvm/arm/hypercalls.c @@ -70,7 +70,15 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu) break; case ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID: val[0] = BIT(ARM_SMCCC_KVM_FUNC_FEATURES); +#ifdef CONFIG_KVM_ASYNC_PF + val[0] |= BIT(ARM_SMCCC_KVM_FUNC_APF); +#endif break; +#ifdef CONFIG_KVM_ASYNC_PF + case ARM_SMCCC_VENDOR_HYP_KVM_APF_FUNC_ID: + val[0] = kvm_async_pf_hypercall(vcpu); + break; +#endif default: return kvm_psci_call(vcpu); } diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c index 95aaabb2b1fc..a303815845a2 100644 --- a/virt/kvm/arm/mmu.c +++ b/virt/kvm/arm/mmu.c @@ -1656,6 +1656,30 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot, (hva & ~(map_size - 1)) + map_size <= uaddr_end; } +static bool try_async_pf(struct kvm_vcpu *vcpu, u32 esr, gpa_t gpa, + gfn_t gfn, kvm_pfn_t *pfn, bool write, + bool *writable, bool prefault) +{ + struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn); +#ifdef CONFIG_KVM_ASYNC_PF + bool async = false; + + /* Bail if *pfn has correct page */ + *pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async, write, writable); + if (!async) + return false; + + if (!prefault && kvm_arch_can_inject_async_page_not_present(vcpu)) { + if (kvm_async_pf_hash_find(vcpu, gfn) || + kvm_arch_setup_async_pf(vcpu, esr, gpa, gfn)) + return true; + } +#endif + + *pfn = __gfn_to_pfn_memslot(slot, gfn, false, NULL, write, writable); + return false; +} + int kvm_handle_user_mem_abort(struct kvm_vcpu *vcpu, unsigned int esr, struct kvm_memory_slot *memslot, phys_addr_t fault_ipa, unsigned long hva, @@ -1737,7 +1761,10 @@ int kvm_handle_user_mem_abort(struct kvm_vcpu *vcpu, unsigned int esr, */ smp_rmb(); - pfn = gfn_to_pfn_prot(kvm, gfn, write_fault, &writable); + if (try_async_pf(vcpu, esr, fault_ipa, gfn, &pfn, + write_fault, &writable, prefault)) + return 1; + if (pfn == KVM_PFN_ERR_HWPOISON) { kvm_send_hwpoison_signal(hva, vma_shift); return 0; From patchwork Fri May 8 03:29:18 2020 Content-Type: 
text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gavin Shan X-Patchwork-Id: 11535425 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6BB3C913 for ; Fri, 8 May 2020 03:32:59 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 3A7512085B for ; Fri, 8 May 2020 03:32:59 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="Ppj1/7ST"; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="PBNLV2iZ" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3A7512085B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=BcQXN2Q6ttx+19eHfT9c+OsoDfO2TMV8rtH4pBvprzo=; b=Ppj1/7STXL2/DQ yq7aQHhDGYZo0oRGjZGIs6G3ZNluhIoAg58RLzRe6cdtnkjKqG+HLRiHvVvDNzojsvhHbE8E9cKFV pijJzcEDXfaQ8NIOHCs1fLKog/QAczDbnWB97lwAV9UlUSiLAtJBkGcAixCQmRN7kHsyLnOt5BYkJ tvnZbtwRpWKK9vunJmnI99aSJx/MW/LjkfBpVDlR6E6TbtuivopSzdGf3Dajk+Fo5QiTrC22W/xRH BGDZoC33XkzbcfFzTCBTrFXqdv3O9zH5G2YVucw6y6rEDQQC31suNcEnCTD6RanaWxmZ3DOAaB3rA LqYhqNA+xnCMda1a6Zzg==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1jWtks-0003qc-FM; Fri, 08 May 2020 03:32:54 +0000 Received: from us-smtp-2.mimecast.com ([207.211.31.81] helo=us-smtp-delivery-1.mimecast.com) by bombadil.infradead.org with esmtps (Exim 4.92.3 #3 (Red Hat Linux)) id 1jWtjI-0002KF-JT for linux-arm-kernel@lists.infradead.org; Fri, 08 May 2020 03:31:19 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1588908674; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=E8MHh9dyvpe8bslvZNsUP5j2B3v4NSqfl6A+NT0wxD4=; b=PBNLV2iZmfz3CoCQMxxp8HKLsT6+Uie1GKKGg2VXsH8XOkkkpt9OI2lv8kYtsdCWZu51Hu c9BeP5u3XHCPOhlPRlem8JMoyzgK35JJmFvjABqF8+xqXrvx0za+eCRbqLMRKGjVanYJYZ U3y46Gybq7utj4h4lHAEtuufm+fPGZM= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-259-kwm_sW8LPE6l4dM0Y68Z0w-1; Thu, 07 May 2020 23:31:13 -0400 X-MC-Unique: kwm_sW8LPE6l4dM0Y68Z0w-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 6762D107ACCA; Fri, 8 May 2020 03:31:11 
From: Gavin Shan
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH RFCv2 8/9] kernel/sched: Add cpu_rq_is_locked()
Date: Fri, 8 May 2020 13:29:18 +1000
Message-Id: <20200508032919.52147-9-gshan@redhat.com>
In-Reply-To: <20200508032919.52147-1-gshan@redhat.com>
References: <20200508032919.52147-1-gshan@redhat.com>
MIME-Version: 1.0
Cc: mark.rutland@arm.com, aarcange@redhat.com, drjones@redhat.com,
 suzuki.poulose@arm.com, maz@kernel.org, linux-kernel@vger.kernel.org,
 eric.auger@redhat.com, james.morse@arm.com, shan.gavin@gmail.com,
 catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org

This adds the API cpu_rq_is_locked() to check whether the given CPU's
runqueue is currently locked. It is used by the subsequent patch to
decide whether a task wakeup should be executed immediately or delayed.

Signed-off-by: Gavin Shan
---
 include/linux/sched.h | 1 +
 kernel/sched/core.c   | 8 ++++++++
 2 files changed, 9 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4418f5cb8324..e68882443da7 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1691,6 +1691,7 @@ extern struct task_struct *find_task_by_pid_ns(pid_t nr, struct pid_namespace *n
  */
 extern struct task_struct *find_get_task_by_vpid(pid_t nr);
 
+extern bool cpu_rq_is_locked(int cpu);
 extern int wake_up_state(struct task_struct *tsk, unsigned int state);
 extern int wake_up_process(struct task_struct *tsk);
 extern void wake_up_new_task(struct task_struct *tsk);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9a2fbf98fd6f..30f4a8845495 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -73,6 +73,14 @@ __read_mostly int scheduler_running;
  */
 int sysctl_sched_rt_runtime = 950000;
 
+bool cpu_rq_is_locked(int cpu)
+{
+        struct rq *rq = cpu_rq(cpu);
+
+        return raw_spin_is_locked(&rq->lock);
+}
+EXPORT_SYMBOL_GPL(cpu_rq_is_locked);
+
 /*
  * __task_rq_lock - lock the rq @p resides on.
  */
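[Editor's note: the sketch below is not part of the posted series. It shows
how a caller is expected to consume cpu_rq_is_locked(): a waker that may run
with its own runqueue lock held defers the wakeup instead of calling
swake_up_one() directly, which is the pattern the next patch uses in
kvm_async_pf_wake_one(). The struct and helper names here are illustrative
only.]

/* Illustrative only: defer a wakeup while the local runqueue is locked. */
#include <linux/sched.h>
#include <linux/smp.h>
#include <linux/swait.h>

struct deferred_wake {
        struct swait_queue_head wq;
        bool delayed;           /* flushed later from a safe context */
};

static void deferred_wake_one(struct deferred_wake *dw)
{
        /*
         * Waking up now could make the wakeup path retake this CPU's
         * rq->lock and deadlock. Remember the wakeup and let a safe
         * context (e.g. return-to-user or idle entry) flush it.
         */
        if (cpu_rq_is_locked(smp_processor_id())) {
                dw->delayed = true;
                return;
        }

        swake_up_one(&dw->wq);
}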
From patchwork Fri May 8 03:29:19 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gavin Shan
X-Patchwork-Id: 11535427
From: Gavin Shan
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH RFCv2 9/9] arm64: Support async page fault
Date: Fri, 8 May 2020 13:29:19 +1000
Message-Id: <20200508032919.52147-10-gshan@redhat.com>
In-Reply-To: <20200508032919.52147-1-gshan@redhat.com>
References: <20200508032919.52147-1-gshan@redhat.com>
Cc: mark.rutland@arm.com, aarcange@redhat.com, drjones@redhat.com,
 suzuki.poulose@arm.com, maz@kernel.org, linux-kernel@vger.kernel.org,
 eric.auger@redhat.com, james.morse@arm.com, shan.gavin@gmail.com,
 catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org

This supports asynchronous page fault for the guest. The design is
similar to the x86 one: on receiving a PAGE_NOT_PRESENT signal from the
host, the current task is either rescheduled or put into power saving
mode. The task is woken up when the PAGE_READY signal is received.

The PAGE_READY signal might be received in the context of the very
process that needs to be woken up. The suspended process would then have
to wake itself, which is unsafe and prone to deadlock on the CPU's
runqueue lock. In that case the wakeup is therefore delayed until the
task returns from kernel space to user space, or until the idle process
is picked to run. The signals are conveyed through the async page fault
control block, which is passed to the host when the functionality is
enabled. On each page fault the control block is checked, and the
handler switches to the async page fault flow if a signal is pending.

The feature is put under the CONFIG_KVM_GUEST umbrella, which is added
by this patch. Inline functions in kvm_para.h, implemented as on other
architectures, are used to check whether async page fault (one of the
KVM para-virtualized features) is available. Also, the kernel boot
parameter "no-kvmapf" can be specified to disable the feature.
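[Editor's note: the per-CPU control block referenced above (apf_data, used by
the accessors in the diff below) is defined by an earlier patch of this
series, which is not quoted here. As a reading aid, a plausible layout is
sketched below; it mirrors the x86 kvm_vcpu_pv_apf_data of the same era, and
the exact field order and padding in the arm64 series may differ.]

/* Assumed guest/host shared block: written by the host, polled by the guest. */
struct kvm_vcpu_pv_apf_data {
        __u32 reason;   /* 0 if idle, else KVM_PV_REASON_PAGE_NOT_PRESENT
                         * or KVM_PV_REASON_PAGE_READY (1 and 2 on x86)
                         */
        __u8 pad[60];
        __u32 enabled;  /* async PF armed on this vCPU */
};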
Signed-off-by: Gavin Shan
---
 arch/arm64/Kconfig                 |  11 +
 arch/arm64/include/asm/exception.h |   3 +
 arch/arm64/include/asm/kvm_para.h  |  27 +-
 arch/arm64/kernel/entry.S          |  33 +++
 arch/arm64/kernel/process.c        |   4 +
 arch/arm64/mm/fault.c              | 434 +++++++++++++++++++++++++++++
 6 files changed, 505 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 40fb05d96c60..2d5e5ee62d6d 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1045,6 +1045,17 @@ config PARAVIRT
          under a hypervisor, potentially improving performance significantly
          over full virtualization.
 
+config KVM_GUEST
+       bool "KVM Guest Support"
+       depends on PARAVIRT
+       default y
+       help
+         This option enables various optimizations for running under the KVM
+         hypervisor. Overhead for the kernel when not running inside KVM
+         should be minimal.
+
+         In case of doubt, say Y.
+
 config PARAVIRT_TIME_ACCOUNTING
        bool "Paravirtual steal time accounting"
        select PARAVIRT
diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index 7a6e81ca23a8..d878afa42746 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -46,4 +46,7 @@ void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr);
 void do_cp15instr(unsigned int esr, struct pt_regs *regs);
 void do_el0_svc(struct pt_regs *regs);
 void do_el0_svc_compat(struct pt_regs *regs);
+#ifdef CONFIG_KVM_GUEST
+void kvm_async_pf_delayed_wake(void);
+#endif
 #endif /* __ASM_EXCEPTION_H */
diff --git a/arch/arm64/include/asm/kvm_para.h b/arch/arm64/include/asm/kvm_para.h
index 0ea481dd1c7a..b2f8ef243df7 100644
--- a/arch/arm64/include/asm/kvm_para.h
+++ b/arch/arm64/include/asm/kvm_para.h
@@ -3,6 +3,20 @@
 #define _ASM_ARM_KVM_PARA_H
 
 #include
+#include
+#include
+
+#ifdef CONFIG_KVM_GUEST
+static inline int kvm_para_available(void)
+{
+       return 1;
+}
+#else
+static inline int kvm_para_available(void)
+{
+       return 0;
+}
+#endif /* CONFIG_KVM_GUEST */
 
 static inline bool kvm_check_and_clear_guest_paused(void)
 {
@@ -11,17 +25,16 @@ static inline bool kvm_check_and_clear_guest_paused(void)
 
 static inline unsigned int kvm_arch_para_features(void)
 {
-       return 0;
+       unsigned int features = 0;
+
+       if (kvm_arm_hyp_service_available(ARM_SMCCC_KVM_FUNC_APF))
+               features |= (1 << KVM_FEATURE_ASYNC_PF);
+
+       return features;
 }
 
 static inline unsigned int kvm_arch_para_hints(void)
 {
        return 0;
 }
-
-static inline bool kvm_para_available(void)
-{
-       return false;
-}
-
 #endif /* _ASM_ARM_KVM_PARA_H */
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index ddcde093c433..15efd57129ff 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -751,12 +751,45 @@ finish_ret_to_user:
        enable_step_tsk x1, x2
 #ifdef CONFIG_GCC_PLUGIN_STACKLEAK
        bl      stackleak_erase
+#endif
+#ifdef CONFIG_KVM_GUEST
+       bl      kvm_async_pf_delayed_wake
 #endif
        kernel_exit 0
 ENDPROC(ret_to_user)
 
        .popsection // .entry.text
 
+#ifdef CONFIG_KVM_GUEST
+       .pushsection ".rodata", "a"
+SYM_DATA_START(__exception_handlers_offset)
+       .quad   0
+       .quad   0
+       .quad   0
+       .quad   0
+       .quad   el1_sync - vectors
+       .quad   el1_irq - vectors
+       .quad   0
+       .quad   el1_error - vectors
+       .quad   el0_sync - vectors
+       .quad   el0_irq - vectors
+       .quad   0
+       .quad   el0_error - vectors
+#ifdef CONFIG_COMPAT
+       .quad   el0_sync_compat - vectors
+       .quad   el0_irq_compat - vectors
+       .quad   0
+       .quad   el0_error_compat - vectors
+#else
+       .quad   0
+       .quad   0
+       .quad   0
+       .quad   0
+#endif
+SYM_DATA_END(__exception_handlers_offset)
+       .popsection // .rodata
+#endif /* CONFIG_KVM_GUEST */
+
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 /*
  * Exception vectors trampoline.
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 56be4cbf771f..5e7ee553566d 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -53,6 +53,7 @@
 #include
 #include
 #include
+#include
 
 #if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_STACKPROTECTOR_PER_TASK)
 #include
@@ -70,6 +71,9 @@ void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
 
 static void __cpu_do_idle(void)
 {
+#ifdef CONFIG_KVM_GUEST
+       kvm_async_pf_delayed_wake();
+#endif
        dsb(sy);
        wfi();
 }
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index c9cedc0432d2..cbf8b52135c9 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -19,10 +19,12 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
 #include
+#include
 #include
 #include
@@ -48,8 +50,31 @@ struct fault_info {
        const char *name;
 };
 
+#ifdef CONFIG_KVM_GUEST
+struct kvm_task_sleep_node {
+       struct hlist_node link;
+       struct swait_queue_head wq;
+       u32 token;
+       struct task_struct *task;
+       int cpu;
+       bool halted;
+       bool delayed;
+};
+
+struct kvm_task_sleep_head {
+       raw_spinlock_t lock;
+       struct hlist_head list;
+};
+#endif /* CONFIG_KVM_GUEST */
+
 static const struct fault_info fault_info[];
 static struct fault_info debug_fault_info[];
+#ifdef CONFIG_KVM_GUEST
+extern char __exception_handlers_offset[];
+static bool async_pf_available = true;
+static DEFINE_PER_CPU(struct kvm_vcpu_pv_apf_data, apf_data) __aligned(64);
+static DEFINE_PER_CPU(struct kvm_task_sleep_head, apf_head);
+#endif
 
 static inline const struct fault_info *esr_to_fault_info(unsigned int esr)
 {
@@ -717,10 +742,281 @@ static const struct fault_info fault_info[] = {
        { do_bad,       SIGKILL, SI_KERNEL,     "unknown 63" },
 };
 
+#ifdef CONFIG_KVM_GUEST
+static inline unsigned int kvm_async_pf_read_enabled(void)
+{
+       return __this_cpu_read(apf_data.enabled);
+}
+
+static inline void kvm_async_pf_write_enabled(unsigned int val)
+{
+       __this_cpu_write(apf_data.enabled, val);
+}
+
+static inline unsigned int kvm_async_pf_read_reason(void)
+{
+       return __this_cpu_read(apf_data.reason);
+}
+
+static inline void kvm_async_pf_write_reason(unsigned int val)
+{
+       __this_cpu_write(apf_data.reason, val);
+}
+
+#define kvm_async_pf_lock(b, flags)                                    \
+       raw_spin_lock_irqsave(&(b)->lock, (flags))
+#define kvm_async_pf_trylock(b, flags)                                 \
+       raw_spin_trylock_irqsave(&(b)->lock, (flags))
+#define kvm_async_pf_unlock(b, flags)                                  \
+       raw_spin_unlock_irqrestore(&(b)->lock, (flags))
+#define kvm_async_pf_unlock_and_clear(b, flags)                        \
+       do {                                                            \
+               raw_spin_unlock_irqrestore(&(b)->lock, (flags));        \
+               kvm_async_pf_write_reason(0);                           \
+       } while (0)
+
+static struct kvm_task_sleep_node *kvm_async_pf_find(
+               struct kvm_task_sleep_head *b, u32 token)
+{
+       struct kvm_task_sleep_node *n;
+       struct hlist_node *p;
+
+       hlist_for_each(p, &b->list) {
+               n = hlist_entry(p, typeof(*n), link);
+               if (n->token == token)
+                       return n;
+       }
+
+       return NULL;
+}
+
+static void kvm_async_pf_wait(u32 token, int in_kernel)
+{
+       struct kvm_task_sleep_head *b = this_cpu_ptr(&apf_head);
+       struct kvm_task_sleep_node n, *e;
+       DECLARE_SWAITQUEUE(wait);
+       unsigned long flags;
+
+       kvm_async_pf_lock(b, flags);
+       e = kvm_async_pf_find(b, token);
+       if (e) {
+               /* A dummy entry exists: the wakeup arrived ahead of the PF. */
+               hlist_del(&e->link);
+               kfree(e);
+               kvm_async_pf_unlock_and_clear(b, flags);
+
+               return;
+       }
+
+       n.token = token;
+       n.task = current;
+       n.cpu = smp_processor_id();
+       n.halted = is_idle_task(current) ||
+                  (IS_ENABLED(CONFIG_PREEMPT_COUNT) ?
+                   preempt_count() > 1 || rcu_preempt_depth() : in_kernel);
+       n.delayed = false;
+       init_swait_queue_head(&n.wq);
+       hlist_add_head(&n.link, &b->list);
+       kvm_async_pf_unlock_and_clear(b, flags);
+
+       for (;;) {
+               if (!n.halted) {
+                       prepare_to_swait_exclusive(&n.wq, &wait,
+                                                  TASK_UNINTERRUPTIBLE);
+               }
+
+               if (hlist_unhashed(&n.link))
+                       break;
+
+               if (!n.halted) {
+                       schedule();
+               } else {
+                       dsb(sy);
+                       wfi();
+               }
+       }
+
+       if (!n.halted)
+               finish_swait(&n.wq, &wait);
+}
+
+/*
+ * There are two cases where the suspended process can't be woken up
+ * immediately: the waker is the suspended process itself, or the
+ * current CPU's runqueue is locked. In either case an immediate
+ * wakeup could deadlock, so the wakeup is marked as delayed instead.
+ */
+static inline void kvm_async_pf_wake_one(struct kvm_task_sleep_node *n)
+{
+       if (n->task == current ||
+           cpu_rq_is_locked(smp_processor_id())) {
+               n->delayed = true;
+               return;
+       }
+
+       hlist_del_init(&n->link);
+       if (n->halted)
+               smp_send_reschedule(n->cpu);
+       else
+               swake_up_one(&n->wq);
+}
+
+void kvm_async_pf_delayed_wake(void)
+{
+       struct kvm_task_sleep_head *b;
+       struct kvm_task_sleep_node *n;
+       struct hlist_node *p, *next;
+       unsigned int reason;
+       unsigned long flags;
+
+       if (!kvm_async_pf_read_enabled())
+               return;
+
+       /*
+        * We're called on the exception-return and idle paths and need
+        * to complete the work as quickly as possible, so do a
+        * preliminary check without holding the lock.
+        */
+       b = this_cpu_ptr(&apf_head);
+       if (hlist_empty(&b->list))
+               return;
+
+       /*
+        * Set the async page fault reason to a nonzero value so that
+        * no new signal is accepted meanwhile; taking one here might
+        * cause lock contention and possibly deadlock. As we're in
+        * guest context, it's safe to write the reason here.
+        *
+        * If a signal is already pending, the reason must be left
+        * untouched, or the pending signal would be lost.
+        */
+       reason = kvm_async_pf_read_reason();
+       if (!reason) {
+               kvm_async_pf_write_reason(KVM_PV_REASON_PAGE_NOT_PRESENT +
+                                         KVM_PV_REASON_PAGE_READY);
+       }
+
+       if (!kvm_async_pf_trylock(b, flags))
+               goto done;
+
+       hlist_for_each_safe(p, next, &b->list) {
+               n = hlist_entry(p, typeof(*n), link);
+               if (n->cpu != smp_processor_id())
+                       continue;
+               if (!n->delayed)
+                       continue;
+
+               kvm_async_pf_wake_one(n);
+       }
+
+       kvm_async_pf_unlock(b, flags);
+
+done:
+       if (!reason)
+               kvm_async_pf_write_reason(0);
+}
+NOKPROBE_SYMBOL(kvm_async_pf_delayed_wake);
+
+static void kvm_async_pf_wake_all(void)
+{
+       struct kvm_task_sleep_head *b;
+       struct kvm_task_sleep_node *n;
+       struct hlist_node *p, *next;
+       unsigned long flags;
+
+       b = this_cpu_ptr(&apf_head);
+       kvm_async_pf_lock(b, flags);
+
+       hlist_for_each_safe(p, next, &b->list) {
+               n = hlist_entry(p, typeof(*n), link);
+               kvm_async_pf_wake_one(n);
+       }
+
+       kvm_async_pf_unlock(b, flags);
+
+       kvm_async_pf_write_reason(0);
+}
+
+static void kvm_async_pf_wake(u32 token)
+{
+       struct kvm_task_sleep_head *b = this_cpu_ptr(&apf_head);
+       struct kvm_task_sleep_node *n;
+       unsigned long flags;
+
+       if (token == ~0) {
+               kvm_async_pf_wake_all();
+               return;
+       }
+
+again:
+       kvm_async_pf_lock(b, flags);
+
+       n = kvm_async_pf_find(b, token);
+       if (!n) {
+               /*
+                * The async PF has not been handled yet. Add a dummy
+                * entry for the token; on allocation failure, busy-wait
+                * until another CPU handles the async PF.
+                */
+               n = kzalloc(sizeof(*n), GFP_ATOMIC);
+               if (!n) {
+                       kvm_async_pf_unlock(b, flags);
+                       cpu_relax();
+                       goto again;
+               }
+               n->token = token;
+               n->task = current;
+               n->cpu = smp_processor_id();
+               n->halted = false;
+               n->delayed = false;
+               init_swait_queue_head(&n->wq);
+               hlist_add_head(&n->link, &b->list);
+       } else {
+               kvm_async_pf_wake_one(n);
+       }
+
+       kvm_async_pf_unlock_and_clear(b, flags);
+}
+
+static bool do_async_pf(unsigned long addr, unsigned int esr,
+                       struct pt_regs *regs)
+{
+       u32 reason;
+
+       if (!kvm_async_pf_read_enabled())
+               return false;
+
+       reason = kvm_async_pf_read_reason();
+       if (!reason)
+               return false;
+
+       switch (reason) {
+       case KVM_PV_REASON_PAGE_NOT_PRESENT:
+               kvm_async_pf_wait((u32)addr, !user_mode(regs));
+               break;
+       case KVM_PV_REASON_PAGE_READY:
+               kvm_async_pf_wake((u32)addr);
+               break;
+       default:
+               pr_warn("%s: Illegal reason %u\n", __func__, reason);
+               kvm_async_pf_write_reason(0);
+       }
+
+       return true;
+}
+#endif /* CONFIG_KVM_GUEST */
+
 void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs)
 {
        const struct fault_info *inf = esr_to_fault_info(esr);
 
+#ifdef CONFIG_KVM_GUEST
+       if (do_async_pf(addr, esr, regs))
+               return;
+#endif
+
        if (!inf->fn(addr, esr, regs))
                return;
@@ -878,3 +1174,141 @@ void do_debug_exception(unsigned long addr_if_watchpoint, unsigned int esr,
        debug_exception_exit(regs);
 }
 NOKPROBE_SYMBOL(do_debug_exception);
+
+#ifdef CONFIG_KVM_GUEST
+static int __init kvm_async_pf_available(char *arg)
+{
+       async_pf_available = false;
+       return 0;
+}
+early_param("no-kvmapf", kvm_async_pf_available);
+
+static void kvm_async_pf_enable(bool enable)
+{
+       struct arm_smccc_res res;
+       unsigned long *offsets = (unsigned long *)__exception_handlers_offset;
+       u32 enabled = kvm_async_pf_read_enabled();
+       u64 val;
+       int i;
+
+       if (enable == enabled)
+               return;
+
+       if (enable) {
+               /*
+                * Asynchronous page faults are prohibited while the CPU
+                * executes instructions between the vector base and the
+                * maximal handler offset, plus 4096. The 4096 is the
+                * assumed maximal length of an individual handler; the
+                * hardware registers are saved to the stack at the
+                * beginning of each handler, so 4096 should be safe
+                * enough.
+                */
+               val = 0;
+               for (i = 0; i < 16; i++) {
+                       if (offsets[i] > val)
+                               val = offsets[i];
+               }
+
+               val += 4096;
+               val |= BIT(62);
+
+               arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_APF_FUNC_ID,
+                                    (u32)val, (u32)(val >> 32), &res);
+               if (res.a0 != SMCCC_RET_SUCCESS) {
+                       pr_warn("Async PF configuration error %ld on CPU %d\n",
+                               res.a0, smp_processor_id());
+                       return;
+               }
+
+               /* FIXME: Enable KVM_ASYNC_PF_SEND_ALWAYS */
+               val = BIT(63);
+               val |= virt_to_phys(this_cpu_ptr(&apf_data));
+               val |= KVM_ASYNC_PF_ENABLED;
+
+               kvm_async_pf_write_enabled(1);
+               arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_APF_FUNC_ID,
+                                    (u32)val, (u32)(val >> 32), &res);
+               if (res.a0 != SMCCC_RET_SUCCESS) {
+                       pr_warn("Async PF enable error %ld on CPU %d\n",
+                               res.a0, smp_processor_id());
+                       kvm_async_pf_write_enabled(0);
+                       return;
+               }
+       } else {
+               val = BIT(63);
+               arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_APF_FUNC_ID,
+                                    (u32)val, (u32)(val >> 32), &res);
+               if (res.a0 != SMCCC_RET_SUCCESS) {
+                       pr_warn("Async PF disable error %ld on CPU %d\n",
+                               res.a0, smp_processor_id());
+                       return;
+               }
+
+               kvm_async_pf_write_enabled(0);
+       }
+
+       pr_info("Async PF %s on CPU %d\n",
+               enable ? "enabled" : "disabled", smp_processor_id());
"enabled" : "disabled", smp_processor_id()); +} + +static void kvm_async_pf_cpu_reboot(void *unused) +{ + kvm_async_pf_enable(false); +} + +static int kvm_async_pf_cpu_reboot_notify(struct notifier_block *nb, + unsigned long code, void *unused) +{ + if (code == SYS_RESTART) + on_each_cpu(kvm_async_pf_cpu_reboot, NULL, 1); + + return NOTIFY_DONE; +} + +static struct notifier_block kvm_async_pf_cpu_reboot_nb = { + .notifier_call = kvm_async_pf_cpu_reboot_notify, +}; + +static int kvm_async_pf_cpu_online(unsigned int cpu) +{ + struct kvm_task_sleep_head *b; + + b = this_cpu_ptr(&apf_head); + raw_spin_lock_init(&b->lock); + kvm_async_pf_enable(true); + return 0; +} + +static int kvm_async_pf_cpu_offline(unsigned int cpu) +{ + kvm_async_pf_enable(false); + return 0; +} + +static int __init kvm_async_pf_cpu_init(void) +{ + struct kvm_task_sleep_head *b; + int ret; + + if (!kvm_para_has_feature(KVM_FEATURE_ASYNC_PF) || + !async_pf_available) + return -EPERM; + + register_reboot_notifier(&kvm_async_pf_cpu_reboot_nb); + ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, + "arm/kvm:online", kvm_async_pf_cpu_online, + kvm_async_pf_cpu_offline); + if (ret < 0) { + pr_warn("%s: Error %d to install cpu hotplug callbacks\n", + __func__, ret); + return ret; + } + + b = this_cpu_ptr(&apf_head); + raw_spin_lock_init(&b->lock); + kvm_async_pf_enable(true); + + return 0; +} +early_initcall(kvm_async_pf_cpu_init); +#endif /* CONFIG_KVM_GUEST */