From patchwork Tue Jun 8 14:11:29 2021
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 12306965
From: Fuad Tabba <tabba@google.com>
Date: Tue, 8 Jun 2021 15:11:29 +0100
Subject: [PATCH v1 01/13] KVM: arm64: Remove trailing whitespace in comments
Message-Id: <20210608141141.997398-2-tabba@google.com>
In-Reply-To: <20210608141141.997398-1-tabba@google.com>
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
 alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
 christoffer.dall@arm.com, pbonzini@redhat.com, qperret@google.com,
 kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kernel-team@android.com, tabba@google.com
List-ID: kvm@vger.kernel.org

I will be editing this file later, and my editor always cleans up trailing
whitespace. Removing it earlier for clearer future patches.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/sys_regs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 1a7968ad078c..15c247fc9f0c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -318,14 +318,14 @@ static bool trap_dbgauthstatus_el1(struct kvm_vcpu *vcpu,
 /*
  * We want to avoid world-switching all the DBG registers all the
  * time:
- * 
+ *
  * - If we've touched any debug register, it is likely that we're
  *   going to touch more of them. It then makes sense to disable the
  *   traps and start doing the save/restore dance
  * - If debug is active (DBG_MDSCR_KDE or DBG_MDSCR_MDE set), it is
  *   then mandatory to save/restore the registers, as the guest
  *   depends on them.
- * 
+ *
  * For this, we use a DIRTY bit, indicating the guest has modified the
  * debug registers, used as follow:
  *

From patchwork Tue Jun 8 14:11:30 2021
X-Patchwork-Id: 12306979
From: Fuad Tabba <tabba@google.com>
Date: Tue, 8 Jun 2021 15:11:30 +0100
Subject: [PATCH v1 02/13] KVM: arm64: MDCR_EL2 is a 64-bit register
Message-Id: <20210608141141.997398-3-tabba@google.com>
In-Reply-To: <20210608141141.997398-1-tabba@google.com>

Fix the places in KVM that treat MDCR_EL2 as a 32-bit register. More recent
features (e.g., FEAT_SPEv1p2) use bits above 31.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_arm.h   | 20 ++++++++++----------
 arch/arm64/include/asm/kvm_asm.h   |  2 +-
 arch/arm64/include/asm/kvm_host.h  |  2 +-
 arch/arm64/kvm/debug.c             |  5 +++--
 arch/arm64/kvm/hyp/nvhe/debug-sr.c |  2 +-
 arch/arm64/kvm/hyp/vhe/debug-sr.c  |  2 +-
 6 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 692c9049befa..25d8a61888e4 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -280,18 +280,18 @@
 /* Hyp Debug Configuration Register bits */
 #define MDCR_EL2_E2TB_MASK	(UL(0x3))
 #define MDCR_EL2_E2TB_SHIFT	(UL(24))
-#define MDCR_EL2_TTRF		(1 << 19)
-#define MDCR_EL2_TPMS		(1 << 14)
+#define MDCR_EL2_TTRF		(UL(1) << 19)
+#define MDCR_EL2_TPMS		(UL(1) << 14)
 #define MDCR_EL2_E2PB_MASK	(UL(0x3))
 #define MDCR_EL2_E2PB_SHIFT	(UL(12))
-#define MDCR_EL2_TDRA		(1 << 11)
-#define MDCR_EL2_TDOSA		(1 << 10)
-#define MDCR_EL2_TDA		(1 << 9)
-#define MDCR_EL2_TDE		(1 << 8)
-#define MDCR_EL2_HPME		(1 << 7)
-#define MDCR_EL2_TPM		(1 << 6)
-#define MDCR_EL2_TPMCR		(1 << 5)
-#define MDCR_EL2_HPMN_MASK	(0x1F)
+#define MDCR_EL2_TDRA		(UL(1) << 11)
+#define MDCR_EL2_TDOSA		(UL(1) << 10)
+#define MDCR_EL2_TDA		(UL(1) << 9)
+#define MDCR_EL2_TDE		(UL(1) << 8)
+#define MDCR_EL2_HPME		(UL(1) << 7)
+#define MDCR_EL2_TPM		(UL(1) << 6)
+#define MDCR_EL2_TPMCR		(UL(1) << 5)
+#define MDCR_EL2_HPMN_MASK	(UL(0x1F))
 
 /* For compatibility with fault code shared with 32-bit */
 #define FSC_FAULT	ESR_ELx_FSC_FAULT
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 5e9b33cbac51..d88a5550552c 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -209,7 +209,7 @@ extern u64 __vgic_v3_read_vmcr(void);
 extern void __vgic_v3_write_vmcr(u32 vmcr);
 extern void __vgic_v3_init_lrs(void);
 
-extern u32 __kvm_get_mdcr_el2(void);
+extern u64 __kvm_get_mdcr_el2(void);
 
 #define __KVM_EXTABLE(from, to)						\
 	"	.pushsection	__kvm_ex_table, \"a\"\n"
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 5645af2a1431..45fdd0b7063f 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -286,7 +286,7 @@ struct kvm_vcpu_arch {
 	/* HYP configuration */
 	u64 hcr_el2;
-	u32 mdcr_el2;
+	u64 mdcr_el2;
 
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index d5e79d7ee6e9..f7385bfbc9e4 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -21,7 +21,7 @@
 			    DBG_MDSCR_KDE | \
 			    DBG_MDSCR_MDE)
 
-static DEFINE_PER_CPU(u32, mdcr_el2);
+static DEFINE_PER_CPU(u64, mdcr_el2);
 
 /**
  * save/restore_guest_debug_regs
@@ -154,7 +154,8 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
 void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 {
-	unsigned long mdscr, orig_mdcr_el2 = vcpu->arch.mdcr_el2;
+	unsigned long mdscr;
+	u64 orig_mdcr_el2 = vcpu->arch.mdcr_el2;
 
 	trace_kvm_arm_setup_debug(vcpu, vcpu->guest_debug);
 
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 7d3f25868cae..df361d839902 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -109,7 +109,7 @@ void __debug_switch_to_host(struct kvm_vcpu *vcpu)
 	__debug_switch_to_host_common(vcpu);
 }
 
-u32 __kvm_get_mdcr_el2(void)
+u64 __kvm_get_mdcr_el2(void)
 {
 	return read_sysreg(mdcr_el2);
 }
diff --git a/arch/arm64/kvm/hyp/vhe/debug-sr.c b/arch/arm64/kvm/hyp/vhe/debug-sr.c
index f1e2e5a00933..289689b2682d 100644
--- a/arch/arm64/kvm/hyp/vhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/debug-sr.c
@@ -20,7 +20,7 @@ void __debug_switch_to_host(struct kvm_vcpu *vcpu)
 	__debug_switch_to_host_common(vcpu);
 }
 
-u32 __kvm_get_mdcr_el2(void)
+u64 __kvm_get_mdcr_el2(void)
 {
 	return read_sysreg(mdcr_el2);
 }

From patchwork Tue Jun 8 14:11:31 2021
X-Patchwork-Id: 12306967
From: Fuad Tabba <tabba@google.com>
Date: Tue, 8 Jun 2021 15:11:31 +0100
Subject: [PATCH v1 03/13] KVM: arm64: Fix name of HCR_TACR to match the spec
Message-Id: <20210608141141.997398-4-tabba@google.com>
In-Reply-To: <20210608141141.997398-1-tabba@google.com>

This makes the name easier to grep and to cross-check against the Arm
Architecture Reference Manual.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_arm.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 25d8a61888e4..d140e3c4c34f 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -33,7 +33,7 @@
 #define HCR_TPU		(UL(1) << 24)
 #define HCR_TPC		(UL(1) << 23)
 #define HCR_TSW		(UL(1) << 22)
-#define HCR_TAC		(UL(1) << 21)
+#define HCR_TACR	(UL(1) << 21)
 #define HCR_TIDCP	(UL(1) << 20)
 #define HCR_TSC		(UL(1) << 19)
 #define HCR_TID3	(UL(1) << 18)
@@ -60,7 +60,7 @@
  * The bits we set in HCR:
  * TLOR:	Trap LORegion register accesses
  * RW:		64bit by default, can be overridden for 32bit VMs
- * TAC:		Trap ACTLR
+ * TACR:	Trap ACTLR
  * TSC:		Trap SMC
  * TSW:		Trap cache operations by set/way
  * TWE:		Trap WFE
@@ -75,7 +75,7 @@
  * PTW:		Take a stage2 fault if a stage1 walk steps in device memory
  */
 #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
-			 HCR_BSU_IS | HCR_FB | HCR_TAC | \
+			 HCR_BSU_IS | HCR_FB | HCR_TACR | \
			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
			 HCR_FMO | HCR_IMO | HCR_PTW )
 #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)

From patchwork Tue Jun 8 14:11:32 2021
X-Patchwork-Id: 12306969
From: Fuad Tabba <tabba@google.com>
Date: Tue, 8 Jun 2021 15:11:32 +0100
Subject: [PATCH v1 04/13] KVM: arm64: Refactor sys_regs.h,c for nVHE reuse
Message-Id: <20210608141141.997398-5-tabba@google.com>
In-Reply-To: <20210608141141.997398-1-tabba@google.com>

Refactor sys_regs.h and sys_regs.c to make it easier to reuse common code.
It will be used in nVHE in a later patch.

No functional change intended.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/sys_regs.c | 58 ++++++++++-----------------------------
 arch/arm64/kvm/sys_regs.h | 35 +++++++++++++++++++++++
 2 files changed, 50 insertions(+), 43 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 15c247fc9f0c..73d09bbd173c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -44,10 +44,6 @@
  * 64bit interface.
  */
 
-#define reg_to_encoding(x)						\
-	sys_reg((u32)(x)->Op0, (u32)(x)->Op1,				\
-		(u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2)
-
 static bool read_from_write_only(struct kvm_vcpu *vcpu,
				 struct sys_reg_params *params,
				 const struct sys_reg_desc *r)
@@ -1026,8 +1022,6 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
 	return true;
 }
 
-#define FEATURE(x)	(GENMASK_ULL(x##_SHIFT + 3, x##_SHIFT))
-
 /* Read a sanitised cpufeature ID register by sys_reg_desc */
 static u64 read_id_reg(const struct kvm_vcpu *vcpu,
		       struct sys_reg_desc const *r, bool raz)
@@ -1038,33 +1032,33 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 	switch (id) {
 	case SYS_ID_AA64PFR0_EL1:
 		if (!vcpu_has_sve(vcpu))
-			val &= ~FEATURE(ID_AA64PFR0_SVE);
-		val &= ~FEATURE(ID_AA64PFR0_AMU);
-		val &= ~FEATURE(ID_AA64PFR0_CSV2);
-		val |= FIELD_PREP(FEATURE(ID_AA64PFR0_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2);
-		val &= ~FEATURE(ID_AA64PFR0_CSV3);
-		val |= FIELD_PREP(FEATURE(ID_AA64PFR0_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3);
+			val &= ~SYS_FEATURE(ID_AA64PFR0_SVE);
+		val &= ~SYS_FEATURE(ID_AA64PFR0_AMU);
+		val &= ~SYS_FEATURE(ID_AA64PFR0_CSV2);
+		val |= FIELD_PREP(SYS_FEATURE(ID_AA64PFR0_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2);
+		val &= ~SYS_FEATURE(ID_AA64PFR0_CSV3);
+		val |= FIELD_PREP(SYS_FEATURE(ID_AA64PFR0_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3);
 		break;
 	case SYS_ID_AA64PFR1_EL1:
-		val &= ~FEATURE(ID_AA64PFR1_MTE);
+		val &= ~SYS_FEATURE(ID_AA64PFR1_MTE);
 		break;
 	case SYS_ID_AA64ISAR1_EL1:
 		if (!vcpu_has_ptrauth(vcpu))
-			val &= ~(FEATURE(ID_AA64ISAR1_APA) |
-				 FEATURE(ID_AA64ISAR1_API) |
-				 FEATURE(ID_AA64ISAR1_GPA) |
-				 FEATURE(ID_AA64ISAR1_GPI));
+			val &= ~(SYS_FEATURE(ID_AA64ISAR1_APA) |
+				 SYS_FEATURE(ID_AA64ISAR1_API) |
+				 SYS_FEATURE(ID_AA64ISAR1_GPA) |
+				 SYS_FEATURE(ID_AA64ISAR1_GPI));
 		break;
 	case SYS_ID_AA64DFR0_EL1:
 		/* Limit debug to ARMv8.0 */
-		val &= ~FEATURE(ID_AA64DFR0_DEBUGVER);
-		val |= FIELD_PREP(FEATURE(ID_AA64DFR0_DEBUGVER), 6);
+		val &= ~SYS_FEATURE(ID_AA64DFR0_DEBUGVER);
+		val |= FIELD_PREP(SYS_FEATURE(ID_AA64DFR0_DEBUGVER), 6);
 		/* Limit guests to PMUv3 for ARMv8.4 */
 		val = cpuid_feature_cap_perfmon_field(val,
						      ID_AA64DFR0_PMUVER_SHIFT,
						      kvm_vcpu_has_pmu(vcpu) ? ID_AA64DFR0_PMUVER_8_4 : 0);
 		/* Hide SPE from guests */
-		val &= ~FEATURE(ID_AA64DFR0_PMSVER);
+		val &= ~SYS_FEATURE(ID_AA64DFR0_PMSVER);
 		break;
 	case SYS_ID_DFR0_EL1:
 		/* Limit guests to PMUv3 for ARMv8.4 */
@@ -2082,23 +2076,6 @@ static int check_sysreg_table(const struct sys_reg_desc *table, unsigned int n,
 	return 0;
 }
 
-static int match_sys_reg(const void *key, const void *elt)
-{
-	const unsigned long pval = (unsigned long)key;
-	const struct sys_reg_desc *r = elt;
-
-	return pval - reg_to_encoding(r);
-}
-
-static const struct sys_reg_desc *find_reg(const struct sys_reg_params *params,
-					   const struct sys_reg_desc table[],
-					   unsigned int num)
-{
-	unsigned long pval = reg_to_encoding(params);
-
-	return bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg);
-}
-
 int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu)
 {
 	kvm_inject_undefined(vcpu);
@@ -2341,13 +2318,8 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
 
 	trace_kvm_handle_sys_reg(esr);
 
-	params.Op0 = (esr >> 20) & 3;
-	params.Op1 = (esr >> 14) & 0x7;
-	params.CRn = (esr >> 10) & 0xf;
-	params.CRm = (esr >> 1) & 0xf;
-	params.Op2 = (esr >> 17) & 0x7;
+	params = esr_sys64_to_params(esr);
 	params.regval = vcpu_get_reg(vcpu, Rt);
-	params.is_write = !(esr & 1);
 
 	ret = emulate_sys_reg(vcpu, &params);
 
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 9d0621417c2a..f7cde4436f32 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -11,6 +11,12 @@
 #ifndef __ARM64_KVM_SYS_REGS_LOCAL_H__
 #define __ARM64_KVM_SYS_REGS_LOCAL_H__
 
+#include <linux/bsearch.h>
+
+#define reg_to_encoding(x)						\
+	sys_reg((u32)(x)->Op0, (u32)(x)->Op1,				\
+		(u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2)
+
 struct sys_reg_params {
	u8	Op0;
	u8	Op1;
@@ -21,6 +27,14 @@ struct sys_reg_params {
	bool	is_write;
 };
 
+#define esr_sys64_to_params(esr)					\
+	((struct sys_reg_params){ .Op0 = ((esr) >> 20) & 3,		\
+				  .Op1 = ((esr) >> 14) & 0x7,		\
+				  .CRn = ((esr) >> 10) & 0xf,		\
+				  .CRm = ((esr) >> 1) & 0xf,		\
+				  .Op2 = ((esr) >> 17) & 0x7,		\
+				  .is_write = !((esr) & 1) })
+
 struct sys_reg_desc {
	/* Sysreg string for debug */
	const char *name;
@@ -152,6 +166,24 @@ static inline int cmp_sys_reg(const struct sys_reg_desc *i1,
 	return i1->Op2 - i2->Op2;
 }
 
+static inline int match_sys_reg(const void *key, const void *elt)
+{
+	const unsigned long pval = (unsigned long)key;
+	const struct sys_reg_desc *r = elt;
+
+	return pval - reg_to_encoding(r);
+}
+
+static inline const struct sys_reg_desc *
+find_reg(const struct sys_reg_params *params, const struct sys_reg_desc table[],
+	 unsigned int num)
+{
+	unsigned long pval = reg_to_encoding(params);
+
+	return __inline_bsearch((void *)pval, table, num, sizeof(table[0]),
+				match_sys_reg);
+}
+
 const struct sys_reg_desc *find_reg_by_id(u64 id,
					  struct sys_reg_params *params,
					  const struct sys_reg_desc table[],
@@ -170,4 +202,7 @@ const struct sys_reg_desc *find_reg_by_id(u64 id,
	CRn(sys_reg_CRn(reg)), CRm(sys_reg_CRm(reg)),	\
	Op2(sys_reg_Op2(reg))
 
+/* Extract the feature specified from the feature id register. */
+#define SYS_FEATURE(x)	(GENMASK_ULL(x##_SHIFT + 3, x##_SHIFT))
+
 #endif /* __ARM64_KVM_SYS_REGS_LOCAL_H__ */

From patchwork Tue Jun 8 14:11:33 2021
X-Patchwork-Id: 12306971
From: Fuad Tabba <tabba@google.com>
Date: Tue, 8 Jun 2021 15:11:33 +0100
Subject: [PATCH v1 05/13] KVM: arm64: Restore mdcr_el2 from vcpu
Message-Id: <20210608141141.997398-6-tabba@google.com>
In-Reply-To: <20210608141141.997398-1-tabba@google.com>

On deactivating traps, restore the value of mdcr_el2 from the vcpu context,
rather than reading the hardware register directly. The two values are
currently identical, but a future patch will change the value of mdcr_el2 on
activating traps, and this ensures that the intended value is restored.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/nvhe/switch.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index f7af9688c1f7..430b5bae8761 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -73,7 +73,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 	___deactivate_traps(vcpu);
 
-	mdcr_el2 = read_sysreg(mdcr_el2);
+	mdcr_el2 = vcpu->arch.mdcr_el2;
 
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		u64 val;

From patchwork Tue Jun 8 14:11:34 2021
X-Patchwork-Id: 12306973
Date: Tue, 8 Jun 2021 15:11:34 +0100
In-Reply-To: <20210608141141.997398-1-tabba@google.com>
Message-Id: <20210608141141.997398-7-tabba@google.com>
Subject: [PATCH v1 06/13] KVM: arm64: Add feature register flag definitions
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu

Add feature register flag definitions to clarify which features might be toggled.

No functional change intended.

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/sysreg.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 65d15700a168..52e48b9226f6 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -789,6 +789,10 @@
 #define ID_AA64PFR0_FP_SUPPORTED	0x0
 #define ID_AA64PFR0_ASIMD_NI		0xf
 #define ID_AA64PFR0_ASIMD_SUPPORTED	0x0
+#define ID_AA64PFR0_EL3_64BIT_ONLY	0x1
+#define ID_AA64PFR0_EL3_32BIT_64BIT	0x2
+#define ID_AA64PFR0_EL2_64BIT_ONLY	0x1
+#define ID_AA64PFR0_EL2_32BIT_64BIT	0x2
 #define ID_AA64PFR0_EL1_64BIT_ONLY	0x1
 #define ID_AA64PFR0_EL1_32BIT_64BIT	0x2
 #define ID_AA64PFR0_EL0_64BIT_ONLY	0x1
@@ -854,6 +858,7 @@
 #define ID_AA64MMFR0_TGRAN64_SUPPORTED	0x0
 #define ID_AA64MMFR0_TGRAN16_NI		0x0
 #define ID_AA64MMFR0_TGRAN16_SUPPORTED	0x1
+#define ID_AA64MMFR0_PARANGE_40		0x2
 #define ID_AA64MMFR0_PARANGE_48		0x5
 #define ID_AA64MMFR0_PARANGE_52		0x6
@@ -901,6 +906,7 @@
 #define ID_AA64MMFR2_CNP_SHIFT		0

 /* id_aa64dfr0 */
+#define ID_AA64DFR0_MTPMU_SHIFT		48
 #define ID_AA64DFR0_TRBE_SHIFT		44
 #define ID_AA64DFR0_TRACE_FILT_SHIFT	40
 #define ID_AA64DFR0_DOUBLELOCK_SHIFT	36

From patchwork Tue Jun 8 14:11:35 2021
X-Patchwork-Id: 12306963
Date: Tue, 8 Jun 2021 15:11:35 +0100
In-Reply-To: <20210608141141.997398-1-tabba@google.com>
Message-Id: <20210608141141.997398-8-tabba@google.com>
Subject: [PATCH v1 07/13] KVM: arm64: Add config register bit definitions
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu

Add hardware configuration register bit definitions for HCR_EL2 and MDCR_EL2.
Future patches toggle these hyp configuration register bits to trap on certain accesses.

No functional change intended.

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_arm.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index d140e3c4c34f..5bb26be69c3f 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -12,7 +12,11 @@
 #include

 /* Hyp Configuration Register (HCR) bits */
+#define HCR_TID5	(UL(1) << 58)
+#define HCR_DCT		(UL(1) << 57)
 #define HCR_ATA		(UL(1) << 56)
+#define HCR_AMVOFFEN	(UL(1) << 51)
+#define HCR_FIEN	(UL(1) << 47)
 #define HCR_FWB		(UL(1) << 46)
 #define HCR_API		(UL(1) << 41)
 #define HCR_APK		(UL(1) << 40)
@@ -280,7 +284,11 @@
 /* Hyp Debug Configuration Register bits */
 #define MDCR_EL2_E2TB_MASK	(UL(0x3))
 #define MDCR_EL2_E2TB_SHIFT	(UL(24))
+#define MDCR_EL2_MTPME		(UL(1) << 28)
+#define MDCR_EL2_TDCC		(UL(1) << 27)
+#define MDCR_EL2_HCCD		(UL(1) << 23)
 #define MDCR_EL2_TTRF		(UL(1) << 19)
+#define MDCR_EL2_HPMD		(UL(1) << 17)
 #define MDCR_EL2_TPMS		(UL(1) << 14)
 #define MDCR_EL2_E2PB_MASK	(UL(0x3))
 #define MDCR_EL2_E2PB_SHIFT	(UL(12))

From patchwork Tue Jun 8 14:11:36 2021
X-Patchwork-Id: 12306977
Date: Tue, 8 Jun 2021 15:11:36 +0100
In-Reply-To: <20210608141141.997398-1-tabba@google.com>
Message-Id: <20210608141141.997398-9-tabba@google.com>
Subject: [PATCH v1 08/13] KVM: arm64: Guest exit handlers for nVHE hyp
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu

Add an array of pointers to handlers for various trap reasons in nVHE code.

The current code selects how to fix up a guest on exit based on a series of if/else statements. Future patches will also require different handling for guest exits. Create an array of handlers to consolidate them.

No functional change intended as the array isn't populated yet.
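The dispatch pattern the commit message describes can be sketched stand-alone: an array indexed by exception class (EC) replaces the chain of if/else checks. This is an illustrative user-space sketch, not the kernel code; the EC numbers and handler body below are hypothetical.

```c
#include <stddef.h>

#define EC_WFX   0x01	/* hypothetical EC values, for illustration only */
#define EC_SYS64 0x18
#define EC_MAX   0x3f

typedef int (*exit_handle_fn)(unsigned int esr);

static int handle_sys64(unsigned int esr)
{
	(void)esr;	/* a real handler would decode the trapped access here */
	return 1;	/* exit handled; return to the guest */
}

/* A NULL entry means "no hyp handler; fall back to the host". */
static exit_handle_fn exit_handlers[EC_MAX + 1] = {
	[EC_SYS64] = handle_sys64,
};

int fixup_guest_exit(unsigned int esr)
{
	unsigned int ec = (esr >> 26) & 0x3f;	/* EC field is ESR[31:26] */
	exit_handle_fn fn = exit_handlers[ec];

	return fn ? fn(esr) : 0;
}
```

Consolidating the selection into a table keeps the per-EC policy in one place, which is what lets later patches populate entries without touching the common exit path.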
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 19 ++++++++++++++
 arch/arm64/kvm/hyp/nvhe/switch.c        | 35 +++++++++++++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index e4a2f295a394..f5d3d1da0aec 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -405,6 +405,18 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
 	return true;
 }

+typedef int (*exit_handle_fn)(struct kvm_vcpu *);
+
+exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu);
+
+static exit_handle_fn kvm_get_hyp_exit_handler(struct kvm_vcpu *vcpu)
+{
+	if (is_nvhe_hyp_code())
+		return kvm_get_nvhe_exit_handler(vcpu);
+	else
+		return NULL;
+}
+
 /*
  * Return true when we were able to fixup the guest exit and should return to
  * the guest, false when we should restore the host state and return to the
@@ -412,6 +424,8 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
  */
 static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
+	exit_handle_fn exit_handler;
+
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
 		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
@@ -492,6 +506,11 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 		goto guest;
 	}

+	/* Check if there's an exit handler and allow it to handle the exit. */
+	exit_handler = kvm_get_hyp_exit_handler(vcpu);
+	if (exit_handler && exit_handler(vcpu))
+		goto guest;
+
 exit:
 	/* Return to the host kernel and handle the exit */
 	return false;
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 430b5bae8761..967a3ad74fbd 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -165,6 +165,41 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
 	write_sysreg(pmu->events_host, pmcntenset_el0);
 }

+typedef int (*exit_handle_fn)(struct kvm_vcpu *);
+
+static exit_handle_fn hyp_exit_handlers[] = {
+	[0 ... ESR_ELx_EC_MAX]		= NULL,
+	[ESR_ELx_EC_WFx]		= NULL,
+	[ESR_ELx_EC_CP15_32]		= NULL,
+	[ESR_ELx_EC_CP15_64]		= NULL,
+	[ESR_ELx_EC_CP14_MR]		= NULL,
+	[ESR_ELx_EC_CP14_LS]		= NULL,
+	[ESR_ELx_EC_CP14_64]		= NULL,
+	[ESR_ELx_EC_HVC32]		= NULL,
+	[ESR_ELx_EC_SMC32]		= NULL,
+	[ESR_ELx_EC_HVC64]		= NULL,
+	[ESR_ELx_EC_SMC64]		= NULL,
+	[ESR_ELx_EC_SYS64]		= NULL,
+	[ESR_ELx_EC_SVE]		= NULL,
+	[ESR_ELx_EC_IABT_LOW]		= NULL,
+	[ESR_ELx_EC_DABT_LOW]		= NULL,
+	[ESR_ELx_EC_SOFTSTP_LOW]	= NULL,
+	[ESR_ELx_EC_WATCHPT_LOW]	= NULL,
+	[ESR_ELx_EC_BREAKPT_LOW]	= NULL,
+	[ESR_ELx_EC_BKPT32]		= NULL,
+	[ESR_ELx_EC_BRK64]		= NULL,
+	[ESR_ELx_EC_FP_ASIMD]		= NULL,
+	[ESR_ELx_EC_PAC]		= NULL,
+};
+
+exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu)
+{
+	u32 esr = kvm_vcpu_get_esr(vcpu);
+	u8 esr_ec = ESR_ELx_EC(esr);
+
+	return hyp_exit_handlers[esr_ec];
+}
+
 /* Switch to the guest for legacy non-VHE systems */
 int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {

From patchwork Tue Jun 8 14:11:37 2021
X-Patchwork-Id: 12306957
Date: Tue, 8 Jun 2021 15:11:37 +0100
In-Reply-To: <20210608141141.997398-1-tabba@google.com>
Message-Id: <20210608141141.997398-10-tabba@google.com>
Subject: [PATCH v1 09/13] KVM: arm64: Add trap handlers for protected VMs
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu

Add trap handlers for protected VMs. These are mainly for Sys64 and debug traps.

No functional change intended as these are not hooked in yet.
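The trap-handling policy introduced by this patch has three outcomes: a register missing from the descriptor table is restricted (an undefined exception is injected), an entry with a NULL accessor is deferred to the host, and an entry with a non-NULL accessor is handled in hyp. A stand-alone sketch of that policy, with hypothetical encodings and a linear search standing in for the kernel's sorted-table lookup:

```c
#include <stddef.h>

enum trap_result { INJECT_UNDEF, HOST_HANDLED, HYP_HANDLED };

struct reg_desc {
	unsigned int encoding;			/* sorted, ascending */
	int (*access)(unsigned long *regval);	/* NULL => host handles it */
};

static int read_id_reg(unsigned long *regval)
{
	*regval = 0x1122;	/* sanitised view presented to the guest */
	return 1;
}

/* Illustrative table: encodings here are made up for the sketch. */
static const struct reg_desc descs[] = {
	{ .encoding = 0x10, .access = read_id_reg },	/* hyp-handled  */
	{ .encoding = 0x20, .access = NULL },		/* host-handled */
};

enum trap_result handle_trap(unsigned int enc, unsigned long *regval)
{
	size_t i;

	for (i = 0; i < sizeof(descs) / sizeof(descs[0]); i++) {
		if (descs[i].encoding != enc)
			continue;
		if (!descs[i].access)
			return HOST_HANDLED;
		descs[i].access(regval);
		return HYP_HANDLED;
	}
	return INJECT_UNDEF;	/* restricted by default */
}
```

Defaulting the unlisted case to "restricted" is the safety property the series relies on: a register the table's authors never considered cannot silently leak host or hardware state to a protected guest.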
Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_hyp.h | 4 + arch/arm64/kvm/arm.c | 4 + arch/arm64/kvm/hyp/nvhe/Makefile | 2 +- arch/arm64/kvm/hyp/nvhe/sys_regs.c | 496 +++++++++++++++++++++++++++++ 4 files changed, 505 insertions(+), 1 deletion(-) create mode 100644 arch/arm64/kvm/hyp/nvhe/sys_regs.c diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 9d60b3006efc..23d4e5aac41d 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -115,7 +115,11 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus, void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt); #endif +extern u64 kvm_nvhe_sym(id_aa64pfr0_el1_sys_val); +extern u64 kvm_nvhe_sym(id_aa64pfr1_el1_sys_val); +extern u64 kvm_nvhe_sym(id_aa64dfr0_el1_sys_val); extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val); extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val); +extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val); #endif /* __ARM64_KVM_HYP_H__ */ diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index d71da6089822..a56ff3a6d2c0 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -1751,8 +1751,12 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits) void *addr = phys_to_virt(hyp_mem_base); int ret; + kvm_nvhe_sym(id_aa64pfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); + kvm_nvhe_sym(id_aa64pfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1); + kvm_nvhe_sym(id_aa64dfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1); kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); + kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1); ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP); if (ret) diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index 
5df6193fc430..a23f417a0c20 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -14,7 +14,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs)) obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \ hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \ - cache.o setup.o mm.o mem_protect.o + cache.o setup.o mm.o mem_protect.o sys_regs.o obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \ ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o obj-y += $(lib-objs) diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c new file mode 100644 index 000000000000..890c96315e55 --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c @@ -0,0 +1,496 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2021 Google LLC + * Author: Fuad Tabba + */ + +#include + +#include +#include +#include + +#include + +#include "../../sys_regs.h" + +u64 id_aa64pfr0_el1_sys_val; +u64 id_aa64pfr1_el1_sys_val; +u64 id_aa64dfr0_el1_sys_val; +u64 id_aa64mmfr2_el1_sys_val; + +/* + * Inject an undefined exception to the guest. + */ +static void inject_undef(struct kvm_vcpu *vcpu) +{ + u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT); + + vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 | + KVM_ARM64_EXCEPT_AA64_ELx_SYNC | + KVM_ARM64_PENDING_EXCEPTION); + + __kvm_adjust_pc(vcpu); + + write_sysreg_el1(esr, SYS_ESR); + write_sysreg_el1(read_sysreg_el2(SYS_ELR), SYS_ELR); + write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR); + write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR); +} + +/* + * Accessor for undefined accesses. + */ +static bool undef_access(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + inject_undef(vcpu); + return false; +} + +/* + * Accessors for feature registers. + * + * If access is allowed, set the regval to the protected VM's view of the + * register and return true. 
+ * Otherwise, inject an undefined exception and return false. + */ + +/* Accessor for ID_AA64PFR0_EL1. */ +static bool pvm_access_id_aa64pfr0(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + u64 clear_mask; + u64 set_mask; + u64 val = id_aa64pfr0_el1_sys_val; + const struct kvm *kvm = (const struct kvm *) kern_hyp_va(vcpu->kvm); + + if (p->is_write) + return undef_access(vcpu, p, r); + + /* + * No support for: + * - AArch32 state for protected VMs + * - GIC CPU Interface + * - ARMv8.4-RAS (restricted to v1) + * - Scalable Vectors + * - Memory Partitioning and Monitoring + * - Activity Monitoring + * - Secure EL2 (not relevant to non-secure guests) + */ + clear_mask = SYS_FEATURE(ID_AA64PFR0_EL0) | + SYS_FEATURE(ID_AA64PFR0_EL1) | + SYS_FEATURE(ID_AA64PFR0_EL2) | + SYS_FEATURE(ID_AA64PFR0_EL3) | + SYS_FEATURE(ID_AA64PFR0_GIC) | + SYS_FEATURE(ID_AA64PFR0_RAS) | + SYS_FEATURE(ID_AA64PFR0_SVE) | + SYS_FEATURE(ID_AA64PFR0_MPAM) | + SYS_FEATURE(ID_AA64PFR0_AMU) | + SYS_FEATURE(ID_AA64PFR0_SEL2) | + SYS_FEATURE(ID_AA64PFR0_CSV2) | + SYS_FEATURE(ID_AA64PFR0_CSV3); + + set_mask = (ID_AA64PFR0_EL0_64BIT_ONLY << ID_AA64PFR0_EL0_SHIFT) | + (ID_AA64PFR0_EL1_64BIT_ONLY << ID_AA64PFR0_EL1_SHIFT) | + (ID_AA64PFR0_EL2_64BIT_ONLY << ID_AA64PFR0_EL2_SHIFT); + + /* Only set EL3 handling if EL3 exists. */ + if (val & SYS_FEATURE(ID_AA64PFR0_EL3)) + set_mask |= + (ID_AA64PFR0_EL3_64BIT_ONLY << ID_AA64PFR0_EL3_SHIFT); + + /* RAS restricted to v1 (0x1). */ + if (val & SYS_FEATURE(ID_AA64PFR0_RAS)) + set_mask |= FIELD_PREP(SYS_FEATURE(ID_AA64PFR0_RAS), 1); + + /* Check whether Spectre and Meltdown are mitigated. */ + set_mask |= FIELD_PREP(SYS_FEATURE(ID_AA64PFR0_CSV2), + (u64)kvm->arch.pfr0_csv2); + set_mask |= FIELD_PREP(SYS_FEATURE(ID_AA64PFR0_CSV3), + (u64)kvm->arch.pfr0_csv3); + + p->regval = (val & ~clear_mask) | set_mask; + return true; +} + +/* Accessor for ID_AA64PFR1_EL1. 
*/ +static bool pvm_access_id_aa64pfr1(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + u64 clear_mask; + u64 val = id_aa64pfr1_el1_sys_val; + + if (p->is_write) + return undef_access(vcpu, p, r); + + /* + * No support for: + * - ARMv8.4-RAS (restricted to v1) + * - Memory Partitioning and Monitoring + * - Memory Tagging + */ + clear_mask = SYS_FEATURE(ID_AA64PFR1_RASFRAC) | + SYS_FEATURE(ID_AA64PFR1_MPAMFRAC) | + SYS_FEATURE(ID_AA64PFR1_MTE); + + p->regval = val & ~clear_mask; + return true; +} + +/* Accessor for ID_AA64ZFR0_EL1. */ +static bool pvm_access_id_aa64zfr0(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + if (p->is_write) + return undef_access(vcpu, p, r); + + /* No support for Scalable Vectors */ + p->regval = 0; + return true; +} + +/* Accessor for ID_AA64DFR0_EL1. */ +static bool pvm_access_id_aa64dfr0(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + u64 clear_mask; + u64 val = id_aa64dfr0_el1_sys_val; + + if (p->is_write) + return undef_access(vcpu, p, r); + + /* + * No support for: + * - Debug: includes breakpoints, and watchpoints. + * Note: not supporting debug at all is not arch-compliant. + * - OS Double Lock + * - Trace and Self Hosted Trace + * - Performance Monitoring + * - Statistical Profiling + */ + clear_mask = SYS_FEATURE(ID_AA64DFR0_DEBUGVER) | + SYS_FEATURE(ID_AA64DFR0_BRPS) | + SYS_FEATURE(ID_AA64DFR0_WRPS) | + SYS_FEATURE(ID_AA64DFR0_CTX_CMPS) | + SYS_FEATURE(ID_AA64DFR0_DOUBLELOCK) | + SYS_FEATURE(ID_AA64DFR0_TRACEVER) | + SYS_FEATURE(ID_AA64DFR0_TRACE_FILT) | + SYS_FEATURE(ID_AA64DFR0_PMUVER) | + SYS_FEATURE(ID_AA64DFR0_MTPMU) | + SYS_FEATURE(ID_AA64DFR0_PMSVER); + + p->regval = val & ~clear_mask; + return true; +} + +/* Accessor for ID_AA64MMFR0_EL1. 
*/ +static bool pvm_access_id_aa64mmfr0(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + u64 clear_mask; + u64 set_mask; + u64 val = id_aa64mmfr0_el1_sys_val; + + if (p->is_write) + return undef_access(vcpu, p, r); + + /* + * No support for: + * - Nested Virtualization + * + * Only support for: + * - 4KB granule + * - 40-bit IPA + */ + clear_mask = SYS_FEATURE(ID_AA64MMFR0_ECV) | + SYS_FEATURE(ID_AA64MMFR0_FGT) | + SYS_FEATURE(ID_AA64MMFR0_TGRAN4_2) | + SYS_FEATURE(ID_AA64MMFR0_TGRAN64_2) | + SYS_FEATURE(ID_AA64MMFR0_TGRAN16_2) | + SYS_FEATURE(ID_AA64MMFR0_TGRAN4) | + SYS_FEATURE(ID_AA64MMFR0_TGRAN16) | + SYS_FEATURE(ID_AA64MMFR0_PARANGE); + + set_mask = SYS_FEATURE(ID_AA64MMFR0_TGRAN64) | + (ID_AA64MMFR0_PARANGE_40 << ID_AA64MMFR0_PARANGE_SHIFT); + + p->regval = (val & ~clear_mask) | set_mask; + return true; +} + +/* Accessor for ID_AA64MMFR1_EL1. */ +static bool pvm_access_id_aa64mmfr1(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + u64 clear_mask; + u64 val = id_aa64mmfr1_el1_sys_val; + + if (p->is_write) + return undef_access(vcpu, p, r); + + /* + * No support for: + * - Nested Virtualization + * - Limited Ordering Regions + */ + clear_mask = SYS_FEATURE(ID_AA64MMFR1_TWED) | + SYS_FEATURE(ID_AA64MMFR1_XNX) | + SYS_FEATURE(ID_AA64MMFR1_VHE) | + SYS_FEATURE(ID_AA64MMFR1_LOR); + + p->regval = val & ~clear_mask; + return true; +} + +/* Accessor for ID_AA64MMFR2_EL1. 
*/ +static bool pvm_access_id_aa64mmfr2(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + u64 clear_mask; + u64 val = id_aa64mmfr2_el1_sys_val; + + if (p->is_write) + return undef_access(vcpu, p, r); + + /* + * No support for: + * - Nested Virtualization + * - Small translation tables + * - 64-bit format of CCSIDR_EL1 + * - 52-bit VAs + * - AArch32 state for protected VMs + */ + clear_mask = SYS_FEATURE(ID_AA64MMFR2_EVT) | + SYS_FEATURE(ID_AA64MMFR2_FWB) | + SYS_FEATURE(ID_AA64MMFR2_NV) | + SYS_FEATURE(ID_AA64MMFR2_ST) | + SYS_FEATURE(ID_AA64MMFR2_CCIDX) | + SYS_FEATURE(ID_AA64MMFR2_LVA) | + SYS_FEATURE(ID_AA64MMFR2_LSM); + + p->regval = val & ~clear_mask; + return true; +} + +/* + * Accessor for AArch32 Processor Feature Registers. + * + * The value of these registers is "unknown" according to the spec if AArch32 + * isn't supported. + */ +static bool pvm_access_id_aarch32(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + if (p->is_write) + return undef_access(vcpu, p, r); + + /* Use 0 for architecturally "unknown" values. */ + p->regval = 0; + return true; +} + +/* Mark the specified system register as an AArch32 feature register. */ +#define AARCH32(REG) { SYS_DESC(REG), .access = pvm_access_id_aarch32 } + +/* Mark the specified system register as not being handled in hyp. */ +#define HOST_HANDLED(REG) { SYS_DESC(REG), .access = NULL } + +/* + * Architected system registers. + * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2 + * + * NOTE: Anything not explicitly listed here will be *restricted by default*, + * i.e., it will lead to injecting an exception into the guest. + */ +static const struct sys_reg_desc pvm_sys_reg_descs[] = { + /* Cache maintenance by set/way operations are restricted. 
*/ + + /* Debug and Trace Registers are all restricted */ + + /* AArch64 mappings of the AArch32 ID registers */ + /* CRm=1 */ + AARCH32(SYS_ID_PFR0_EL1), + AARCH32(SYS_ID_PFR1_EL1), + AARCH32(SYS_ID_DFR0_EL1), + AARCH32(SYS_ID_AFR0_EL1), + AARCH32(SYS_ID_MMFR0_EL1), + AARCH32(SYS_ID_MMFR1_EL1), + AARCH32(SYS_ID_MMFR2_EL1), + AARCH32(SYS_ID_MMFR3_EL1), + + /* CRm=2 */ + AARCH32(SYS_ID_ISAR0_EL1), + AARCH32(SYS_ID_ISAR1_EL1), + AARCH32(SYS_ID_ISAR2_EL1), + AARCH32(SYS_ID_ISAR3_EL1), + AARCH32(SYS_ID_ISAR4_EL1), + AARCH32(SYS_ID_ISAR5_EL1), + AARCH32(SYS_ID_MMFR4_EL1), + AARCH32(SYS_ID_ISAR6_EL1), + + /* CRm=3 */ + AARCH32(SYS_MVFR0_EL1), + AARCH32(SYS_MVFR1_EL1), + AARCH32(SYS_MVFR2_EL1), + AARCH32(SYS_ID_PFR2_EL1), + AARCH32(SYS_ID_DFR1_EL1), + AARCH32(SYS_ID_MMFR5_EL1), + + /* AArch64 ID registers */ + /* CRm=4 */ + { SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = pvm_access_id_aa64pfr0 }, + { SYS_DESC(SYS_ID_AA64PFR1_EL1), .access = pvm_access_id_aa64pfr1 }, + { SYS_DESC(SYS_ID_AA64ZFR0_EL1), .access = pvm_access_id_aa64zfr0 }, + { SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = pvm_access_id_aa64dfr0 }, + HOST_HANDLED(SYS_ID_AA64DFR1_EL1), + HOST_HANDLED(SYS_ID_AA64AFR0_EL1), + HOST_HANDLED(SYS_ID_AA64AFR1_EL1), + HOST_HANDLED(SYS_ID_AA64ISAR0_EL1), + HOST_HANDLED(SYS_ID_AA64ISAR1_EL1), + { SYS_DESC(SYS_ID_AA64MMFR0_EL1), .access = pvm_access_id_aa64mmfr0 }, + { SYS_DESC(SYS_ID_AA64MMFR1_EL1), .access = pvm_access_id_aa64mmfr1 }, + { SYS_DESC(SYS_ID_AA64MMFR2_EL1), .access = pvm_access_id_aa64mmfr2 }, + + HOST_HANDLED(SYS_SCTLR_EL1), + HOST_HANDLED(SYS_ACTLR_EL1), + HOST_HANDLED(SYS_CPACR_EL1), + + HOST_HANDLED(SYS_RGSR_EL1), + HOST_HANDLED(SYS_GCR_EL1), + + /* Scalable Vector Registers are restricted. 
*/ + + HOST_HANDLED(SYS_TTBR0_EL1), + HOST_HANDLED(SYS_TTBR1_EL1), + HOST_HANDLED(SYS_TCR_EL1), + + HOST_HANDLED(SYS_APIAKEYLO_EL1), + HOST_HANDLED(SYS_APIAKEYHI_EL1), + HOST_HANDLED(SYS_APIBKEYLO_EL1), + HOST_HANDLED(SYS_APIBKEYHI_EL1), + HOST_HANDLED(SYS_APDAKEYLO_EL1), + HOST_HANDLED(SYS_APDAKEYHI_EL1), + HOST_HANDLED(SYS_APDBKEYLO_EL1), + HOST_HANDLED(SYS_APDBKEYHI_EL1), + HOST_HANDLED(SYS_APGAKEYLO_EL1), + HOST_HANDLED(SYS_APGAKEYHI_EL1), + + HOST_HANDLED(SYS_AFSR0_EL1), + HOST_HANDLED(SYS_AFSR1_EL1), + HOST_HANDLED(SYS_ESR_EL1), + + HOST_HANDLED(SYS_ERRIDR_EL1), + HOST_HANDLED(SYS_ERRSELR_EL1), + HOST_HANDLED(SYS_ERXFR_EL1), + HOST_HANDLED(SYS_ERXCTLR_EL1), + HOST_HANDLED(SYS_ERXSTATUS_EL1), + HOST_HANDLED(SYS_ERXADDR_EL1), + HOST_HANDLED(SYS_ERXMISC0_EL1), + HOST_HANDLED(SYS_ERXMISC1_EL1), + + HOST_HANDLED(SYS_TFSR_EL1), + HOST_HANDLED(SYS_TFSRE0_EL1), + + HOST_HANDLED(SYS_FAR_EL1), + HOST_HANDLED(SYS_PAR_EL1), + + /* Performance Monitoring Registers are restricted. */ + + HOST_HANDLED(SYS_MAIR_EL1), + HOST_HANDLED(SYS_AMAIR_EL1), + + /* Limited Ordering Regions Registers are restricted. */ + + HOST_HANDLED(SYS_VBAR_EL1), + HOST_HANDLED(SYS_DISR_EL1), + + /* GIC CPU Interface registers are restricted. */ + + HOST_HANDLED(SYS_CONTEXTIDR_EL1), + HOST_HANDLED(SYS_TPIDR_EL1), + + HOST_HANDLED(SYS_SCXTNUM_EL1), + + HOST_HANDLED(SYS_CNTKCTL_EL1), + + HOST_HANDLED(SYS_CCSIDR_EL1), + HOST_HANDLED(SYS_CLIDR_EL1), + HOST_HANDLED(SYS_CSSELR_EL1), + HOST_HANDLED(SYS_CTR_EL0), + + /* Performance Monitoring Registers are restricted. */ + + HOST_HANDLED(SYS_TPIDR_EL0), + HOST_HANDLED(SYS_TPIDRRO_EL0), + + HOST_HANDLED(SYS_SCXTNUM_EL0), + + /* Activity Monitoring Registers are restricted. */ + + HOST_HANDLED(SYS_CNTP_TVAL_EL0), + HOST_HANDLED(SYS_CNTP_CTL_EL0), + HOST_HANDLED(SYS_CNTP_CVAL_EL0), + + /* Performance Monitoring Registers are restricted. 
*/ + + HOST_HANDLED(SYS_DACR32_EL2), + HOST_HANDLED(SYS_IFSR32_EL2), + HOST_HANDLED(SYS_FPEXC32_EL2), +}; + +/* + * Handler for protected VM MSR, MRS or System instruction execution in AArch64. + * + * Return 1 if handled, or 0 if not. + */ +int kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu) +{ + const struct sys_reg_desc *r; + struct sys_reg_params params; + unsigned long esr = kvm_vcpu_get_esr(vcpu); + int Rt = kvm_vcpu_sys_get_rt(vcpu); + + params = esr_sys64_to_params(esr); + params.regval = vcpu_get_reg(vcpu, Rt); + + r = find_reg(&params, pvm_sys_reg_descs, ARRAY_SIZE(pvm_sys_reg_descs)); + + /* Undefined access (RESTRICTED). */ + if (r == NULL) { + inject_undef(vcpu); + return 1; + } + + /* Handled by the host (HOST_HANDLED) */ + if (r->access == NULL) + return 0; + + /* Handled by hyp: skip instruction if instructed to do so. */ + if (r->access(vcpu, &params, r)) + __kvm_skip_instr(vcpu); + + vcpu_set_reg(vcpu, Rt, params.regval); + return 1; +} + +/* + * Handler for protected VM restricted exceptions. + * + * Inject an undefined exception into the guest and return 1 to indicate that + * it was handled.
+ */ +int kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu) +{ + inject_undef(vcpu); + return 1; +} From patchwork Tue Jun 8 14:11:38 2021
Date: Tue, 8 Jun 2021 15:11:38 +0100 In-Reply-To: <20210608141141.997398-1-tabba@google.com> Message-Id: <20210608141141.997398-11-tabba@google.com> References: <20210608141141.997398-1-tabba@google.com> Subject: [PATCH v1 10/13] KVM: arm64: Move sanitized copies of CPU features From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, pbonzini@redhat.com, qperret@google.com, kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Move the sanitized copies of the CPU feature registers to the recently created sys_regs.c. This consolidates all copies in a more relevant file. No functional change intended. Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/nvhe/mem_protect.c | 6 ------ arch/arm64/kvm/hyp/nvhe/sys_regs.c | 5 +++++ 2 files changed, 5 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index 4b60c0056c04..de734d29e938 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -26,12 +26,6 @@ struct host_kvm host_kvm; static struct hyp_pool host_s2_mem; static struct hyp_pool host_s2_dev; -/* - * Copies of the host's CPU features registers holding sanitized values. - */ -u64 id_aa64mmfr0_el1_sys_val; -u64 id_aa64mmfr1_el1_sys_val; - static const u8 pkvm_hyp_id = 1; static void *host_s2_zalloc_pages_exact(size_t size) diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c index 890c96315e55..998b1b48b089 100644 --- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c +++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c @@ -14,9 +14,14 @@ #include "../../sys_regs.h" +/* + * Copies of the host's CPU features registers holding sanitized values.
+ */ u64 id_aa64pfr0_el1_sys_val; u64 id_aa64pfr1_el1_sys_val; u64 id_aa64dfr0_el1_sys_val; +u64 id_aa64mmfr0_el1_sys_val; +u64 id_aa64mmfr1_el1_sys_val; u64 id_aa64mmfr2_el1_sys_val; /* From patchwork Tue Jun 8 14:11:39 2021
Date: Tue, 8 Jun 2021 15:11:39 +0100 In-Reply-To: <20210608141141.997398-1-tabba@google.com> Message-Id: <20210608141141.997398-12-tabba@google.com> References: <20210608141141.997398-1-tabba@google.com> Subject: [PATCH v1 11/13] KVM: arm64: Trap access to pVM restricted features From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, pbonzini@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Trap
accesses to restricted features for VMs running in protected mode. Accesses to feature registers are emulated, and only supported features are exposed to protected VMs. Accesses to restricted registers as well as restricted instructions are trapped, and an undefined exception is injected into the protected guest. This only affects the functionality of protected VMs; it should not affect non-protected VMs when KVM is running in protected mode. Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/include/hyp/switch.h | 3 + arch/arm64/kvm/hyp/nvhe/switch.c | 105 ++++++++++++++++++++---- 2 files changed, 94 insertions(+), 14 deletions(-) diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index f5d3d1da0aec..d9f087ed6e02 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -33,6 +33,9 @@ extern struct exception_table_entry __start___kvm_ex_table; extern struct exception_table_entry __stop___kvm_ex_table; +int kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu); +int kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu); + /* Check whether the FP regs were dirtied while in the host-side run loop: */ static inline bool update_fp_enabled(struct kvm_vcpu *vcpu) { diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 967a3ad74fbd..48d5f780fe64 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -34,12 +34,63 @@ DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data); DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt); DEFINE_PER_CPU(unsigned long, kvm_hyp_vector); +/* + * Set EL2 configuration registers to trap restricted register accesses and + * instructions for protected VMs. + * + * Should be called right before vcpu entry to restrict its impact only to the + * protected guest.
+ */ +static void __activate_traps_pvm(struct kvm_vcpu *vcpu) +{ + u64 mdcr; + u64 hcr; + u64 cptr; + + if (!kvm_vm_is_protected(kern_hyp_va(vcpu->kvm))) + return; + + mdcr = read_sysreg(mdcr_el2); + hcr = read_sysreg(hcr_el2); + cptr = read_sysreg(cptr_el2); + + hcr |= HCR_TID3 | /* Feature Registers */ + HCR_TLOR | /* LOR */ + HCR_RW | /* AArch64 EL1 only */ + HCR_TERR | /* RAS */ + HCR_ATA | HCR_TID5 | /* Memory Tagging */ + HCR_TACR | HCR_TIDCP | HCR_TID1; /* Implementation defined */ + + hcr &= ~(HCR_DCT | /* Memory Tagging */ + HCR_FIEN | /* RAS */ + HCR_AMVOFFEN); /* Disable AMU register virtualization */ + + /* Debug and Trace */ + mdcr |= MDCR_EL2_TDRA | MDCR_EL2_TDA | MDCR_EL2_TDE | + MDCR_EL2_TDOSA | MDCR_EL2_TDCC | MDCR_EL2_TTRF | + MDCR_EL2_TPM | MDCR_EL2_TPMCR | + MDCR_EL2_TPMS; /* SPE */ + + mdcr &= ~(MDCR_EL2_HPME | MDCR_EL2_MTPME | /* PMU */ + (MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT)); /* SPE */ + + cptr |= CPTR_EL2_TTA | /* Trace */ + CPTR_EL2_TAM | /* AMU */ + CPTR_EL2_TZ; /* SVE */ + + /* __deactivate_traps() restores these registers. */ + write_sysreg(mdcr, mdcr_el2); + write_sysreg(hcr, hcr_el2); + write_sysreg(cptr, cptr_el2); +} + static void __activate_traps(struct kvm_vcpu *vcpu) { u64 val; ___activate_traps(vcpu); __activate_traps_common(vcpu); + __activate_traps_pvm(vcpu); val = CPTR_EL2_DEFAULT; val |= CPTR_EL2_TTA | CPTR_EL2_TAM; @@ -165,30 +216,56 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) write_sysreg(pmu->events_host, pmcntenset_el0); } +/* + * Handle system register accesses for protected VMs. + * + * Return 1 if handled, or 0 if not. + */ +static int handle_pvm_sys64(struct kvm_vcpu *vcpu) +{ + if (kvm_vm_is_protected(kern_hyp_va(vcpu->kvm))) + return kvm_handle_pvm_sys64(vcpu); + else + return 0; +} + +/* + * Handle restricted feature accesses for protected VMs. + * + * Return 1 if handled, or 0 if not.
+ */ +static int handle_pvm_restricted(struct kvm_vcpu *vcpu) +{ + if (kvm_vm_is_protected(kern_hyp_va(vcpu->kvm))) + return kvm_handle_pvm_restricted(vcpu); + else + return 0; +} + typedef int (*exit_handle_fn)(struct kvm_vcpu *); static exit_handle_fn hyp_exit_handlers[] = { - [0 ... ESR_ELx_EC_MAX] = NULL, + [0 ... ESR_ELx_EC_MAX] = handle_pvm_restricted, [ESR_ELx_EC_WFx] = NULL, - [ESR_ELx_EC_CP15_32] = NULL, - [ESR_ELx_EC_CP15_64] = NULL, - [ESR_ELx_EC_CP14_MR] = NULL, - [ESR_ELx_EC_CP14_LS] = NULL, - [ESR_ELx_EC_CP14_64] = NULL, + [ESR_ELx_EC_CP15_32] = handle_pvm_restricted, + [ESR_ELx_EC_CP15_64] = handle_pvm_restricted, + [ESR_ELx_EC_CP14_MR] = handle_pvm_restricted, + [ESR_ELx_EC_CP14_LS] = handle_pvm_restricted, + [ESR_ELx_EC_CP14_64] = handle_pvm_restricted, [ESR_ELx_EC_HVC32] = NULL, [ESR_ELx_EC_SMC32] = NULL, [ESR_ELx_EC_HVC64] = NULL, [ESR_ELx_EC_SMC64] = NULL, - [ESR_ELx_EC_SYS64] = NULL, - [ESR_ELx_EC_SVE] = NULL, + [ESR_ELx_EC_SYS64] = handle_pvm_sys64, + [ESR_ELx_EC_SVE] = handle_pvm_restricted, [ESR_ELx_EC_IABT_LOW] = NULL, [ESR_ELx_EC_DABT_LOW] = NULL, - [ESR_ELx_EC_SOFTSTP_LOW] = NULL, - [ESR_ELx_EC_WATCHPT_LOW] = NULL, - [ESR_ELx_EC_BREAKPT_LOW] = NULL, - [ESR_ELx_EC_BKPT32] = NULL, - [ESR_ELx_EC_BRK64] = NULL, - [ESR_ELx_EC_FP_ASIMD] = NULL, + [ESR_ELx_EC_SOFTSTP_LOW] = handle_pvm_restricted, + [ESR_ELx_EC_WATCHPT_LOW] = handle_pvm_restricted, + [ESR_ELx_EC_BREAKPT_LOW] = handle_pvm_restricted, + [ESR_ELx_EC_BKPT32] = handle_pvm_restricted, + [ESR_ELx_EC_BRK64] = handle_pvm_restricted, + [ESR_ELx_EC_FP_ASIMD] = handle_pvm_restricted, + [ESR_ELx_EC_PAC] = NULL, }; From patchwork Tue Jun 8 14:11:40 2021
Date: Tue, 8 Jun 2021 15:11:40 +0100 In-Reply-To: <20210608141141.997398-1-tabba@google.com> Message-Id: <20210608141141.997398-13-tabba@google.com> References: <20210608141141.997398-1-tabba@google.com> Subject: [PATCH v1 12/13] KVM: arm64: Handle protected guests at 32 bits From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, pbonzini@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Protected KVM does not support protected AArch32 guests. However, it is possible for the guest to force run AArch32, potentially causing problems. Add an extra check so that if the hypervisor catches the guest doing that, it can prevent the guest from running again by resetting vcpu->arch.target and returning ARM_EXCEPTION_IL.
Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric AArch32 systems") Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/include/hyp/switch.h | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index d9f087ed6e02..672801f79579 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -447,6 +447,26 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) write_sysreg_el2(read_sysreg_el2(SYS_ELR) - 4, SYS_ELR); } + /* + * Protected VMs are not allowed to run in AArch32. The check below is + * based on the one in kvm_arch_vcpu_ioctl_run(). + * The ARMv8 architecture doesn't give the hypervisor a mechanism to + * prevent a guest from dropping to AArch32 EL0 if implemented by the + * CPU. If the hypervisor spots a guest in such a state ensure it is + * handled, and don't trust the host to spot or fix it. + */ + if (unlikely(is_nvhe_hyp_code() && + kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) && + vcpu_mode_is_32bit(vcpu))) { + /* + * As we have caught the guest red-handed, decide that it isn't + * fit for purpose anymore by making the vcpu invalid. + */ + vcpu->arch.target = -1; + *exit_code = ARM_EXCEPTION_IL; + goto exit; + } + /* * We're using the raw exception code in order to only process * the trap if no SError is pending. 
We will come back to the From patchwork Tue Jun 8 14:11:41 2021
Date: Tue, 8 Jun 2021 15:11:41 +0100 In-Reply-To: <20210608141141.997398-1-tabba@google.com> Message-Id: <20210608141141.997398-14-tabba@google.com> References: <20210608141141.997398-1-tabba@google.com> Subject: [PATCH v1 13/13] KVM: arm64: Check vcpu features at pVM creation From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, pbonzini@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Check that a protected VM is not setting any of the unsupported features when it's created.
Signed-off-by: Fuad Tabba --- arch/arm64/kvm/pkvm.c | 31 +++++++++++++++++++++++++++++++ 1 file changed, 31 insertions(+) diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index cf624350fb27..5e58d604faec 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -88,10 +88,41 @@ static void pkvm_teardown_firmware_slot(struct kvm *kvm) kvm->arch.pkvm.firmware_slot = NULL; } +/* + * Check that no unsupported features are enabled for the protected VM's vcpus. + * + * Return 0 if all features enabled for all vcpus are supported, or -EINVAL if + * one or more vcpus has one or more unsupported features. + */ +static int pkvm_check_features(struct kvm *kvm) +{ + int i; + const struct kvm_vcpu *vcpu; + + kvm_for_each_vcpu(i, vcpu, kvm) { + /* + * No support for: + * - AArch32 state for protected VMs + * - Performance Monitoring + * - Scalable Vectors + */ + if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features) || + test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features) || + test_bit(KVM_ARM_VCPU_SVE, vcpu->arch.features)) + return -EINVAL; + } + + return 0; +} + static int pkvm_enable(struct kvm *kvm, u64 slotid) { int ret; + ret = pkvm_check_features(kvm); + if (ret) + return ret; + ret = pkvm_init_firmware_slot(kvm, slotid); if (ret) return ret;