From patchwork Mon Jul 19 16:03:32 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12386237
Date: Mon, 19 Jul 2021 17:03:32 +0100
In-Reply-To: <20210719160346.609914-1-tabba@google.com>
Message-Id: <20210719160346.609914-2-tabba@google.com>
References: <20210719160346.609914-1-tabba@google.com>
Subject: [PATCH v3 01/15] KVM: arm64: placeholder to check if VM is protected
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
    alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
    christoffer.dall@arm.com, pbonzini@redhat.com, drjones@redhat.com,
    qperret@google.com, kvm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, kernel-team@android.com,
    tabba@google.com

Add a function to check whether a VM is protected (under pKVM). Since
the creation of protected VMs isn't enabled yet, this is a placeholder
that always returns false. The intention is for this to become a check
for protected VMs in the future (see Will's RFC [*]).

No functional change intended.

Signed-off-by: Fuad Tabba
Acked-by: Will Deacon

[*] https://lore.kernel.org/kvmarm/20210603183347.1695-1-will@kernel.org/
---
 arch/arm64/include/asm/kvm_host.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 41911585ae0c..347781f99b6a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -771,6 +771,11 @@ void kvm_arch_free_vm(struct kvm *kvm);
 
 int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type);
 
+static inline bool kvm_vm_is_protected(struct kvm *kvm)
+{
+	return false;
+}
+
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
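[Editorial sketch, not part of the series: how a caller is expected to use the kvm_vm_is_protected() placeholder above. The `struct kvm` stub and the `host_may_inspect_guest()` policy function are hypothetical, purely for illustration of the intended call pattern.]

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the kernel's struct kvm; only the pointer matters here. */
struct kvm {
	int placeholder;
};

/* Mirrors the helper added by this patch: always false until protected-VM
 * creation is wired up under pKVM. */
static inline bool kvm_vm_is_protected(struct kvm *kvm)
{
	(void)kvm;
	return false;
}

/* Hypothetical caller (not from the patch): a host-side service that must
 * be refused once a VM becomes protected, since the host may no longer
 * inspect guest state under pKVM. */
static bool host_may_inspect_guest(struct kvm *kvm)
{
	return !kvm_vm_is_protected(kvm);
}
```

Once protected-VM creation lands, only the helper's body changes; every guarded call site picks up the new behaviour for free.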
From patchwork Mon Jul 19 16:03:33 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12386235
Date: Mon, 19 Jul 2021 17:03:33 +0100
In-Reply-To: <20210719160346.609914-1-tabba@google.com>
Message-Id: <20210719160346.609914-3-tabba@google.com>
References: <20210719160346.609914-1-tabba@google.com>
Subject: [PATCH v3 02/15] KVM: arm64: Remove trailing whitespace in comment
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
    alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
    christoffer.dall@arm.com, pbonzini@redhat.com, drjones@redhat.com,
    qperret@google.com, kvm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, kernel-team@android.com,
    tabba@google.com

Remove trailing whitespace from comment in trap_dbgauthstatus_el1().

No functional change intended.

Signed-off-by: Fuad Tabba
Acked-by: Will Deacon
---
 arch/arm64/kvm/sys_regs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f6f126eb6ac1..80a6e41cadad 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -318,14 +318,14 @@ static bool trap_dbgauthstatus_el1(struct kvm_vcpu *vcpu,
 	/*
 	 * We want to avoid world-switching all the DBG registers all the
 	 * time:
-	 * 
+	 *
 	 * - If we've touched any debug register, it is likely that we're
 	 *   going to touch more of them. It then makes sense to disable the
 	 *   traps and start doing the save/restore dance
 	 * - If debug is active (DBG_MDSCR_KDE or DBG_MDSCR_MDE set), it is
 	 *   then mandatory to save/restore the registers, as the guest
 	 *   depends on them.
-	 * 
+	 *
 	 * For this, we use a DIRTY bit, indicating the guest has modified the
 	 * debug registers, used as follow:
 	 *

From patchwork Mon Jul 19 16:03:34 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12386247
Date: Mon, 19 Jul 2021 17:03:34 +0100
In-Reply-To: <20210719160346.609914-1-tabba@google.com>
Message-Id: <20210719160346.609914-4-tabba@google.com>
References: <20210719160346.609914-1-tabba@google.com>
Subject: [PATCH v3 03/15] KVM: arm64: MDCR_EL2 is a 64-bit register
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
    alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
    christoffer.dall@arm.com, pbonzini@redhat.com, drjones@redhat.com,
    qperret@google.com, kvm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, kernel-team@android.com,
    tabba@google.com

Fix the places in KVM that treat MDCR_EL2 as a 32-bit register. More
recent features (e.g., FEAT_SPEv1p2) use bits above 31.

No functional change intended.

Acked-by: Will Deacon
Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_arm.h   | 20 ++++++++++----------
 arch/arm64/include/asm/kvm_asm.h   |  2 +-
 arch/arm64/include/asm/kvm_host.h  |  2 +-
 arch/arm64/kvm/debug.c             |  2 +-
 arch/arm64/kvm/hyp/nvhe/debug-sr.c |  2 +-
 arch/arm64/kvm/hyp/vhe/debug-sr.c  |  2 +-
 6 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index d436831dd706..6a523ec83415 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -281,18 +281,18 @@
 /* Hyp Debug Configuration Register bits */
 #define MDCR_EL2_E2TB_MASK	(UL(0x3))
 #define MDCR_EL2_E2TB_SHIFT	(UL(24))
-#define MDCR_EL2_TTRF		(1 << 19)
-#define MDCR_EL2_TPMS		(1 << 14)
+#define MDCR_EL2_TTRF		(UL(1) << 19)
+#define MDCR_EL2_TPMS		(UL(1) << 14)
 #define MDCR_EL2_E2PB_MASK	(UL(0x3))
 #define MDCR_EL2_E2PB_SHIFT	(UL(12))
-#define MDCR_EL2_TDRA		(1 << 11)
-#define MDCR_EL2_TDOSA		(1 << 10)
-#define MDCR_EL2_TDA		(1 << 9)
-#define MDCR_EL2_TDE		(1 << 8)
-#define MDCR_EL2_HPME		(1 << 7)
-#define MDCR_EL2_TPM		(1 << 6)
-#define MDCR_EL2_TPMCR		(1 << 5)
-#define MDCR_EL2_HPMN_MASK	(0x1F)
+#define MDCR_EL2_TDRA		(UL(1) << 11)
+#define MDCR_EL2_TDOSA		(UL(1) << 10)
+#define MDCR_EL2_TDA		(UL(1) << 9)
+#define MDCR_EL2_TDE		(UL(1) << 8)
+#define MDCR_EL2_HPME		(UL(1) << 7)
+#define MDCR_EL2_TPM		(UL(1) << 6)
+#define MDCR_EL2_TPMCR		(UL(1) << 5)
+#define MDCR_EL2_HPMN_MASK	(UL(0x1F))
 
 /* For compatibility with fault code shared with 32-bit */
 #define FSC_FAULT	ESR_ELx_FSC_FAULT

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 9f0bf2109be7..63ead9060ab5 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -210,7 +210,7 @@ extern u64 __vgic_v3_read_vmcr(void);
 extern void __vgic_v3_write_vmcr(u32 vmcr);
 extern void __vgic_v3_init_lrs(void);
 
-extern u32 __kvm_get_mdcr_el2(void);
+extern u64 __kvm_get_mdcr_el2(void);
 
 #define __KVM_EXTABLE(from, to)					\
 	"	.pushsection	__kvm_ex_table, \"a\"\n"	\

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 347781f99b6a..4d2d974c1522 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -289,7 +289,7 @@ struct kvm_vcpu_arch {
 
 	/* HYP configuration */
 	u64 hcr_el2;
-	u32 mdcr_el2;
+	u64 mdcr_el2;
 
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;

diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index d5e79d7ee6e9..db9361338b2a 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -21,7 +21,7 @@
 			       DBG_MDSCR_KDE | \
 			       DBG_MDSCR_MDE)
 
-static DEFINE_PER_CPU(u32, mdcr_el2);
+static DEFINE_PER_CPU(u64, mdcr_el2);
 
 /**
  * save/restore_guest_debug_regs

diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 7d3f25868cae..df361d839902 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -109,7 +109,7 @@ void __debug_switch_to_host(struct kvm_vcpu *vcpu)
 	__debug_switch_to_host_common(vcpu);
 }
 
-u32 __kvm_get_mdcr_el2(void)
+u64 __kvm_get_mdcr_el2(void)
 {
 	return read_sysreg(mdcr_el2);
 }

diff --git a/arch/arm64/kvm/hyp/vhe/debug-sr.c b/arch/arm64/kvm/hyp/vhe/debug-sr.c
index f1e2e5a00933..289689b2682d 100644
--- a/arch/arm64/kvm/hyp/vhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/debug-sr.c
@@ -20,7 +20,7 @@ void __debug_switch_to_host(struct kvm_vcpu *vcpu)
 	__debug_switch_to_host_common(vcpu);
 }
 
-u32 __kvm_get_mdcr_el2(void)
+u64 __kvm_get_mdcr_el2(void)
 {
 	return read_sysreg(mdcr_el2);
 }
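[Editorial sketch, not part of the series: why patch 03/15 widens mdcr_el2 to u64 and gives the bit macros an unsigned long type. `UL()` here is a simplified version of the kernel macro, and `MDCR_EL2_EXAMPLE_BIT43` is a hypothetical bit above position 31, standing in for the newer MDCR_EL2 fields (e.g., FEAT_SPEv1p2) named in the commit message.]

```c
#include <stdint.h>

/* Simplified version of the kernel's UL() macro: makes the constant an
 * unsigned long (64-bit on arm64) instead of a plain 32-bit int. */
#define UL(x)			(x##UL)

/* Hypothetical register bit above bit 31 (illustration only). */
#define MDCR_EL2_EXAMPLE_BIT43	(UL(1) << 43)

/* A u32, as mdcr_el2 was typed before this patch, silently drops any
 * bit at position 32 or higher. */
static uint32_t store_u32(uint64_t v) { return (uint32_t)v; }
static uint64_t store_u64(uint64_t v) { return v; }
```

With the old `(1 << 19)`-style int constants a shift by 43 would not even be well-defined; typing both the storage and the constants as 64-bit keeps the high bits intact.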
From patchwork Mon Jul 19 16:03:35 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12386245
Date: Mon, 19 Jul 2021 17:03:35 +0100
In-Reply-To: <20210719160346.609914-1-tabba@google.com>
Message-Id: <20210719160346.609914-5-tabba@google.com>
References: <20210719160346.609914-1-tabba@google.com>
Subject: [PATCH v3 04/15] KVM: arm64: Fix names of config register fields
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
    alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
    christoffer.dall@arm.com, pbonzini@redhat.com, drjones@redhat.com,
    qperret@google.com, kvm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, kernel-team@android.com,
    tabba@google.com

Change the names of hcr_el2 register fields to match the Arm
Architecture Reference Manual. Easier for cross-referencing and for
grepping.

Also, change the name of CPTR_EL2_RES1 to CPTR_NVHE_EL2_RES1, because
res1 bits are different for VHE.

No functional change intended.

Acked-by: Will Deacon
Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_arm.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 6a523ec83415..a928b2dc0b0f 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -32,9 +32,9 @@
 #define HCR_TVM		(UL(1) << 26)
 #define HCR_TTLB	(UL(1) << 25)
 #define HCR_TPU		(UL(1) << 24)
-#define HCR_TPC		(UL(1) << 23)
+#define HCR_TPC		(UL(1) << 23) /* HCR_TPCP if FEAT_DPB */
 #define HCR_TSW		(UL(1) << 22)
-#define HCR_TAC		(UL(1) << 21)
+#define HCR_TACR	(UL(1) << 21)
 #define HCR_TIDCP	(UL(1) << 20)
 #define HCR_TSC		(UL(1) << 19)
 #define HCR_TID3	(UL(1) << 18)
@@ -61,7 +61,7 @@
  * The bits we set in HCR:
  * TLOR:	Trap LORegion register accesses
  * RW:		64bit by default, can be overridden for 32bit VMs
- * TAC:		Trap ACTLR
+ * TACR:	Trap ACTLR
  * TSC:		Trap SMC
  * TSW:		Trap cache operations by set/way
  * TWE:		Trap WFE
@@ -76,7 +76,7 @@
  * PTW:		Take a stage2 fault if a stage1 walk steps in device memory
 */
 #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
-			 HCR_BSU_IS | HCR_FB | HCR_TAC | \
+			 HCR_BSU_IS | HCR_FB | HCR_TACR | \
			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
			 HCR_FMO | HCR_IMO | HCR_PTW )
 #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
@@ -275,8 +275,8 @@
 #define CPTR_EL2_TTA	(1 << 20)
 #define CPTR_EL2_TFP	(1 << CPTR_EL2_TFP_SHIFT)
 #define CPTR_EL2_TZ	(1 << 8)
-#define CPTR_EL2_RES1	0x000032ff /* known RES1 bits in CPTR_EL2 */
-#define CPTR_EL2_DEFAULT	CPTR_EL2_RES1
+#define CPTR_NVHE_EL2_RES1	0x000032ff /* known RES1 bits in CPTR_EL2 (nVHE) */
+#define CPTR_EL2_DEFAULT	CPTR_NVHE_EL2_RES1
 
 /* Hyp Debug Configuration Register bits */
 #define MDCR_EL2_E2TB_MASK	(UL(0x3))
From patchwork Mon Jul 19 16:03:36 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12386257
Date: Mon, 19 Jul 2021 17:03:36 +0100
In-Reply-To: <20210719160346.609914-1-tabba@google.com>
Message-Id: <20210719160346.609914-6-tabba@google.com>
References: <20210719160346.609914-1-tabba@google.com>
Subject: [PATCH v3 05/15] KVM: arm64: Refactor sys_regs.h,c for nVHE reuse
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
    alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
    christoffer.dall@arm.com, pbonzini@redhat.com, drjones@redhat.com,
    qperret@google.com, kvm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, kernel-team@android.com,
    tabba@google.com

Refactor sys_regs.h and sys_regs.c to make it easier to reuse common
code. It will be used in nVHE in a later patch.

Note that the refactored code uses __inline_bsearch for find_reg
instead of bsearch to avoid copying the bsearch code for nVHE.

No functional change intended.

Signed-off-by: Fuad Tabba
Reviewed-by: Andrew Jones
Acked-by: Will Deacon

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
-#define reg_to_encoding(x)	sys_reg((u32)(x)->Op0, (u32)(x)->Op1, \
-				(u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2)
-
 static bool read_from_write_only(struct kvm_vcpu *vcpu,
				 struct sys_reg_params *params,
				 const struct sys_reg_desc *r)
@@ -1026,8 +1022,6 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
 	return true;
 }
 
-#define FEATURE(x)	(GENMASK_ULL(x##_SHIFT + 3, x##_SHIFT))
-
 /* Read a sanitised cpufeature ID register by sys_reg_desc */
 static u64 read_id_reg(const struct kvm_vcpu *vcpu,
		struct sys_reg_desc const *r, bool raz)
@@ -2106,23 +2100,6 @@ static int check_sysreg_table(const struct sys_reg_desc *table, unsigned int n,
 	return 0;
 }
 
-static int match_sys_reg(const void *key, const void *elt)
-{
-	const unsigned long pval = (unsigned long)key;
-	const struct sys_reg_desc *r = elt;
-
-	return pval - reg_to_encoding(r);
-}
-
-static const struct sys_reg_desc *find_reg(const struct sys_reg_params *params,
-					   const struct sys_reg_desc table[],
-					   unsigned int num)
-{
-	unsigned long pval = reg_to_encoding(params);
-
-	return bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg);
-}
-
 int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu)
 {
 	kvm_inject_undefined(vcpu);
@@ -2365,13 +2342,8 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
 
 	trace_kvm_handle_sys_reg(esr);
 
-	params.Op0 = (esr >> 20) & 3;
-	params.Op1 = (esr >> 14) & 0x7;
-	params.CRn = (esr >> 10) & 0xf;
-	params.CRm = (esr >> 1) & 0xf;
-	params.Op2 = (esr >> 17) & 0x7;
+	params = esr_sys64_to_params(esr);
	params.regval = vcpu_get_reg(vcpu, Rt);
-	params.is_write = !(esr & 1);
 
 	ret = emulate_sys_reg(vcpu, &params);
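[Editorial sketch, not part of the series: the ESR field extraction done by the `esr_sys64_to_params()` macro in this patch, written as a standalone C function. The struct is trimmed to the decoded fields, and `params_to_esr()` is a hypothetical inverse added here purely so the decode can be exercised.]

```c
#include <stdbool.h>
#include <stdint.h>

struct sys_reg_params {
	uint8_t Op0, Op1, CRn, CRm, Op2;
	bool is_write;
};

/* Same field extraction as the patch's esr_sys64_to_params() macro:
 * decode a trapped MSR/MRS syndrome into its Op0/Op1/CRn/CRm/Op2
 * encoding, with ESR bit 0 clear meaning a write. */
static struct sys_reg_params esr_sys64_to_params(uint64_t esr)
{
	return (struct sys_reg_params){
		.Op0 = (esr >> 20) & 3,
		.Op1 = (esr >> 14) & 0x7,
		.CRn = (esr >> 10) & 0xf,
		.CRm = (esr >> 1) & 0xf,
		.Op2 = (esr >> 17) & 0x7,
		.is_write = !(esr & 1),
	};
}

/* Hypothetical inverse, for testing only: pack fields into an ESR-style
 * value using the same bit positions. */
static uint64_t params_to_esr(uint8_t op0, uint8_t op1, uint8_t crn,
			      uint8_t crm, uint8_t op2, bool is_write)
{
	return ((uint64_t)op0 << 20) | ((uint64_t)op1 << 14) |
	       ((uint64_t)crn << 10) | ((uint64_t)crm << 1) |
	       ((uint64_t)op2 << 17) | (is_write ? 0 : 1);
}
```

Hoisting this decode into a macro in sys_regs.h is what lets the nVHE hypervisor reuse it without linking against the host's sys_regs.c.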
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 9d0621417c2a..cc0cc95a0280 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -11,6 +11,12 @@
 #ifndef __ARM64_KVM_SYS_REGS_LOCAL_H__
 #define __ARM64_KVM_SYS_REGS_LOCAL_H__
 
+#include <linux/bsearch.h>
+
+#define reg_to_encoding(x)						\
+	sys_reg((u32)(x)->Op0, (u32)(x)->Op1,				\
+		(u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2)
+
 struct sys_reg_params {
 	u8	Op0;
 	u8	Op1;
@@ -21,6 +27,14 @@ struct sys_reg_params {
 	bool	is_write;
 };
 
+#define esr_sys64_to_params(esr)					\
+	((struct sys_reg_params){ .Op0 = ((esr) >> 20) & 3,		\
+				  .Op1 = ((esr) >> 14) & 0x7,		\
+				  .CRn = ((esr) >> 10) & 0xf,		\
+				  .CRm = ((esr) >> 1) & 0xf,		\
+				  .Op2 = ((esr) >> 17) & 0x7,		\
+				  .is_write = !((esr) & 1) })
+
 struct sys_reg_desc {
 	/* Sysreg string for debug */
 	const char *name;
@@ -152,6 +166,23 @@ static inline int cmp_sys_reg(const struct sys_reg_desc *i1,
 	return i1->Op2 - i2->Op2;
 }
 
+static inline int match_sys_reg(const void *key, const void *elt)
+{
+	const unsigned long pval = (unsigned long)key;
+	const struct sys_reg_desc *r = elt;
+
+	return pval - reg_to_encoding(r);
+}
+
+static inline const struct sys_reg_desc *
+find_reg(const struct sys_reg_params *params, const struct sys_reg_desc table[],
+	 unsigned int num)
+{
+	unsigned long pval = reg_to_encoding(params);
+
+	return __inline_bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg);
+}
+
 const struct sys_reg_desc *find_reg_by_id(u64 id,
					  struct sys_reg_params *params,
					  const struct sys_reg_desc table[],

From patchwork Mon Jul 19 16:03:37 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12386263
Date: Mon, 19 Jul 2021 17:03:37 +0100
In-Reply-To: <20210719160346.609914-1-tabba@google.com>
Message-Id: <20210719160346.609914-7-tabba@google.com>
Subject: [PATCH v3 06/15] KVM: arm64: Restore mdcr_el2 from vcpu
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
 alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
 christoffer.dall@arm.com, pbonzini@redhat.com, drjones@redhat.com,
 qperret@google.com, kvm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kernel-team@android.com,
 tabba@google.com
List-ID: X-Mailing-List: kvm@vger.kernel.org

On deactivating traps, restore the value of mdcr_el2 from the newly
created and preserved host value in the vcpu context, rather than
directly reading the hardware register. Up until and including this
patch the two values are the same, i.e., the hardware register and the
vcpu one.
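The save-then-restore flow this patch sets up can be sketched in plain C. This is an illustrative model only: `hw_mdcr_el2` is a hypothetical variable standing in for the real EL2 system register, and `struct vcpu_ctx` mirrors just the two fields involved, not the kernel's `struct kvm_vcpu_arch`.

```c
#include <assert.h>
#include <stdint.h>

static uint64_t hw_mdcr_el2;	/* stand-in for the mdcr_el2 sysreg */

struct vcpu_ctx {
	uint64_t mdcr_el2;	/* guest value of the trap register */
	uint64_t mdcr_el2_host;	/* host value saved before guest entry */
};

static void activate_traps(struct vcpu_ctx *vcpu)
{
	/* Preserve the host's value before loading the guest's. */
	vcpu->mdcr_el2_host = hw_mdcr_el2;
	hw_mdcr_el2 = vcpu->mdcr_el2;
}

static void deactivate_traps(struct vcpu_ctx *vcpu)
{
	/*
	 * Restore from the saved copy rather than re-deriving the host
	 * value from the live register, which is what the patch enables.
	 */
	hw_mdcr_el2 = vcpu->mdcr_el2_host;
}
```

Once the host value lives in the vcpu context, a later patch can change what is written on activation without breaking the restore path.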
A future patch will be changing the value of mdcr_el2 on activating
traps, and this ensures that its value will be restored.

No functional change intended.

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_host.h       |  5 ++++-
 arch/arm64/include/asm/kvm_hyp.h        |  2 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h |  6 +++++-
 arch/arm64/kvm/hyp/nvhe/switch.c        | 11 ++---------
 arch/arm64/kvm/hyp/vhe/switch.c         | 12 ++----------
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c      |  2 +-
 6 files changed, 15 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4d2d974c1522..76462c6a91ee 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -287,10 +287,13 @@ struct kvm_vcpu_arch {
 	/* Stage 2 paging state used by the hardware on next switch */
 	struct kvm_s2_mmu *hw_mmu;
 
-	/* HYP configuration */
+	/* Values of trap registers for the guest. */
 	u64 hcr_el2;
 	u64 mdcr_el2;
 
+	/* Values of trap registers for the host before guest entry. */
+	u64 mdcr_el2_host;
+
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
 
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 9d60b3006efc..657d0c94cf82 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -95,7 +95,7 @@ void __sve_restore_state(void *sve_pffr, u32 *fpsr);
 
 #ifndef __KVM_NVHE_HYPERVISOR__
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
-void deactivate_traps_vhe_put(void);
+void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
 #endif
 
 u64 __guest_enter(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index e4a2f295a394..a0e78a6027be 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -92,11 +92,15 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
 		write_sysreg(0, pmselr_el0);
 		write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
 	}
+
+	vcpu->arch.mdcr_el2_host = read_sysreg(mdcr_el2);
 	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
 }
 
-static inline void __deactivate_traps_common(void)
+static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
 {
+	write_sysreg(vcpu->arch.mdcr_el2_host, mdcr_el2);
+
 	write_sysreg(0, hstr_el2);
 	if (kvm_arm_support_pmu_v3())
 		write_sysreg(0, pmuserenr_el0);
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index f7af9688c1f7..1778593a08a9 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -69,12 +69,10 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 static void __deactivate_traps(struct kvm_vcpu *vcpu)
 {
 	extern char __kvm_hyp_host_vector[];
-	u64 mdcr_el2, cptr;
+	u64 cptr;
 
 	___deactivate_traps(vcpu);
 
-	mdcr_el2 = read_sysreg(mdcr_el2);
-
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		u64 val;
 
@@ -92,13 +90,8 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 		isb();
 	}
 
-	__deactivate_traps_common();
-
-	mdcr_el2 &= MDCR_EL2_HPMN_MASK;
-	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
-	mdcr_el2 |= MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
+	__deactivate_traps_common(vcpu);
 
-	write_sysreg(mdcr_el2, mdcr_el2);
 	write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
 
 	cptr = CPTR_EL2_DEFAULT;
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index b3229924d243..0d0c9550fb08 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -91,17 +91,9 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
 	__activate_traps_common(vcpu);
 }
 
-void deactivate_traps_vhe_put(void)
+void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
 {
-	u64 mdcr_el2 = read_sysreg(mdcr_el2);
-
-	mdcr_el2 &= MDCR_EL2_HPMN_MASK |
-		    MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
-		    MDCR_EL2_TPMS;
-
-	write_sysreg(mdcr_el2, mdcr_el2);
-
-	__deactivate_traps_common();
+	__deactivate_traps_common(vcpu);
 }
 
 /* Switch to the guest for VHE systems running in EL2 */
diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
index 2a0b8c88d74f..007a12dd4351 100644
--- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
@@ -101,7 +101,7 @@ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu)
 	struct kvm_cpu_context *host_ctxt;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	deactivate_traps_vhe_put();
+	deactivate_traps_vhe_put(vcpu);
 
 	__sysreg_save_el1_state(guest_ctxt);
 	__sysreg_save_user_state(guest_ctxt);

From patchwork Mon Jul 19 16:03:38 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12386241
Date: Mon, 19 Jul 2021 17:03:38 +0100
In-Reply-To: <20210719160346.609914-1-tabba@google.com>
Message-Id: <20210719160346.609914-8-tabba@google.com>
Subject: [PATCH v3 07/15] KVM: arm64: Track value of cptr_el2 in struct kvm_vcpu_arch
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
 alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
 christoffer.dall@arm.com, pbonzini@redhat.com, drjones@redhat.com,
 qperret@google.com, kvm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kernel-team@android.com,
 tabba@google.com
List-ID: X-Mailing-List: kvm@vger.kernel.org

Track the baseline guest value for cptr_el2 in struct kvm_vcpu_arch,
similar to the other registers that control traps. Use this value when
setting cptr_el2 for the guest. Currently this value is unchanged
(CPTR_EL2_DEFAULT), but future patches will set trapping bits based on
features supported for the guest.

No functional change intended.
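How __activate_traps() composes the value written to cptr_el2 once the baseline lives in the vcpu can be sketched as below. The bit values are the kernel's, but `compose_cptr()` and the plain `fp_enabled` flag are illustrative stand-ins for the real code and `update_fp_enabled()`.

```c
#include <assert.h>
#include <stdint.h>

#define CPTR_EL2_TTA		(1u << 20)	/* trap trace accesses */
#define CPTR_EL2_TAM		(1u << 30)	/* trap AMU accesses */
#define CPTR_EL2_TFP		(1u << 10)	/* trap FP/SIMD */
#define CPTR_EL2_TZ		(1u << 8)	/* trap SVE */
#define CPTR_NVHE_EL2_RES1	0x000032ffu	/* known RES1 bits (nVHE) */
#define CPTR_EL2_DEFAULT	CPTR_NVHE_EL2_RES1

static uint64_t compose_cptr(uint64_t vcpu_cptr_el2, int fp_enabled)
{
	/* Start from the per-vcpu baseline instead of CPTR_EL2_DEFAULT. */
	uint64_t val = vcpu_cptr_el2;

	val |= CPTR_EL2_TTA | CPTR_EL2_TAM;
	if (!fp_enabled)
		val |= CPTR_EL2_TFP | CPTR_EL2_TZ;
	return val;
}
```

With the baseline in the vcpu, a later patch can pre-set extra trap bits per guest without touching this composition logic.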
Signed-off-by: Fuad Tabba
Acked-by: Will Deacon
---
 arch/arm64/include/asm/kvm_host.h | 1 +
 arch/arm64/kvm/arm.c              | 1 +
 arch/arm64/kvm/hyp/nvhe/switch.c  | 2 +-
 3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 76462c6a91ee..ac67d5699c68 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -290,6 +290,7 @@ struct kvm_vcpu_arch {
 	/* Values of trap registers for the guest. */
 	u64 hcr_el2;
 	u64 mdcr_el2;
+	u64 cptr_el2;
 
 	/* Values of trap registers for the host before guest entry. */
 	u64 mdcr_el2_host;
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e9a2b8f27792..14b12f2c08c0 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1104,6 +1104,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	}
 
 	vcpu_reset_hcr(vcpu);
+	vcpu->arch.cptr_el2 = CPTR_EL2_DEFAULT;
 
 	/*
 	 * Handle the "start in power-off" case.
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 1778593a08a9..86f3d6482935 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -41,7 +41,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 	___activate_traps(vcpu);
 	__activate_traps_common(vcpu);
 
-	val = CPTR_EL2_DEFAULT;
+	val = vcpu->arch.cptr_el2;
 	val |= CPTR_EL2_TTA | CPTR_EL2_TAM;
 	if (!update_fp_enabled(vcpu)) {
 		val |= CPTR_EL2_TFP | CPTR_EL2_TZ;

From patchwork Mon Jul 19 16:03:39 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12386249
Date: Mon, 19 Jul 2021 17:03:39 +0100
In-Reply-To: <20210719160346.609914-1-tabba@google.com>
Message-Id: <20210719160346.609914-9-tabba@google.com>
Subject: [PATCH v3 08/15] KVM: arm64: Add feature register flag definitions
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
 alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
 christoffer.dall@arm.com, pbonzini@redhat.com, drjones@redhat.com,
 qperret@google.com, kvm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kernel-team@android.com,
 tabba@google.com
List-ID: X-Mailing-List: kvm@vger.kernel.org

Add feature register flag definitions to clarify which features might
be supported. Consolidate the various ID_AA64PFR0_ELx flags for all
ELs.

No functional change intended.
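The consolidation above works because the EL0 and EL1 fields of ID_AA64PFR0 share one encoding, so a single ELx constant can serve both checks. A small sketch, where `extract_unsigned_field()` is an illustrative stand-in for the kernel's `cpuid_feature_extract_unsigned_field()`:

```c
#include <assert.h>
#include <stdint.h>

#define ID_AA64PFR0_EL0_SHIFT		0
#define ID_AA64PFR0_EL1_SHIFT		4
#define ID_AA64PFR0_ELx_64BIT_ONLY	0x1
#define ID_AA64PFR0_ELx_32BIT_64BIT	0x2

static unsigned int extract_unsigned_field(uint64_t reg, int shift)
{
	return (reg >> shift) & 0xf;	/* each ID register field is 4 bits */
}

static int id_aa64pfr0_32bit_el1(uint64_t pfr0)
{
	/* One ELx constant now serves both the EL0 and EL1 checks. */
	return extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_SHIFT) ==
	       ID_AA64PFR0_ELx_32BIT_64BIT;
}

static int id_aa64pfr0_32bit_el0(uint64_t pfr0)
{
	return extract_unsigned_field(pfr0, ID_AA64PFR0_EL0_SHIFT) ==
	       ID_AA64PFR0_ELx_32BIT_64BIT;
}
```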
Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/cpufeature.h |  4 ++--
 arch/arm64/include/asm/sysreg.h     | 12 ++++++++----
 arch/arm64/kernel/cpufeature.c      |  8 ++++----
 3 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 9bb9d11750d7..b7d9bb17908d 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -602,14 +602,14 @@ static inline bool id_aa64pfr0_32bit_el1(u64 pfr0)
 {
 	u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_SHIFT);
 
-	return val == ID_AA64PFR0_EL1_32BIT_64BIT;
+	return val == ID_AA64PFR0_ELx_32BIT_64BIT;
 }
 
 static inline bool id_aa64pfr0_32bit_el0(u64 pfr0)
 {
 	u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL0_SHIFT);
 
-	return val == ID_AA64PFR0_EL0_32BIT_64BIT;
+	return val == ID_AA64PFR0_ELx_32BIT_64BIT;
 }
 
 static inline bool id_aa64pfr0_sve(u64 pfr0)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 326f49e7bd42..0b773037251c 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -784,14 +784,13 @@
 #define ID_AA64PFR0_AMU			0x1
 #define ID_AA64PFR0_SVE			0x1
 #define ID_AA64PFR0_RAS_V1		0x1
+#define ID_AA64PFR0_RAS_ANY		0xf
 #define ID_AA64PFR0_FP_NI		0xf
 #define ID_AA64PFR0_FP_SUPPORTED	0x0
 #define ID_AA64PFR0_ASIMD_NI		0xf
 #define ID_AA64PFR0_ASIMD_SUPPORTED	0x0
-#define ID_AA64PFR0_EL1_64BIT_ONLY	0x1
-#define ID_AA64PFR0_EL1_32BIT_64BIT	0x2
-#define ID_AA64PFR0_EL0_64BIT_ONLY	0x1
-#define ID_AA64PFR0_EL0_32BIT_64BIT	0x2
+#define ID_AA64PFR0_ELx_64BIT_ONLY	0x1
+#define ID_AA64PFR0_ELx_32BIT_64BIT	0x2
 
 /* id_aa64pfr1 */
 #define ID_AA64PFR1_MPAMFRAC_SHIFT	16
@@ -847,12 +846,16 @@
 #define ID_AA64MMFR0_ASID_SHIFT		4
 #define ID_AA64MMFR0_PARANGE_SHIFT	0
 
+#define ID_AA64MMFR0_ASID_8		0x0
+#define ID_AA64MMFR0_ASID_16		0x2
+
 #define ID_AA64MMFR0_TGRAN4_NI		0xf
 #define ID_AA64MMFR0_TGRAN4_SUPPORTED	0x0
 #define ID_AA64MMFR0_TGRAN64_NI		0xf
 #define ID_AA64MMFR0_TGRAN64_SUPPORTED	0x0
 #define ID_AA64MMFR0_TGRAN16_NI		0x0
 #define ID_AA64MMFR0_TGRAN16_SUPPORTED	0x1
+#define ID_AA64MMFR0_PARANGE_40		0x2
 #define ID_AA64MMFR0_PARANGE_48		0x5
 #define ID_AA64MMFR0_PARANGE_52		0x6
 
@@ -900,6 +903,7 @@
 #define ID_AA64MMFR2_CNP_SHIFT		0
 
 /* id_aa64dfr0 */
+#define ID_AA64DFR0_MTPMU_SHIFT		48
 #define ID_AA64DFR0_TRBE_SHIFT		44
 #define ID_AA64DFR0_TRACE_FILT_SHIFT	40
 #define ID_AA64DFR0_DOUBLELOCK_SHIFT	36
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 0ead8bfedf20..5b59fe5e26e4 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -239,8 +239,8 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_ELx_64BIT_ONLY),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_ELx_64BIT_ONLY),
 	ARM64_FTR_END,
 };
 
@@ -1956,7 +1956,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sys_reg = SYS_ID_AA64PFR0_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64PFR0_EL0_SHIFT,
-		.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
+		.min_field_value = ID_AA64PFR0_ELx_32BIT_64BIT,
 	},
 #ifdef CONFIG_KVM
 	{
@@ -1967,7 +1967,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sys_reg = SYS_ID_AA64PFR0_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64PFR0_EL1_SHIFT,
-		.min_field_value = ID_AA64PFR0_EL1_32BIT_64BIT,
+		.min_field_value = ID_AA64PFR0_ELx_32BIT_64BIT,
 	},
 	{
 		.desc = "Protected KVM",

From patchwork Mon Jul 19 16:03:40 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12386243
Date: Mon, 19 Jul 2021 17:03:40 +0100
In-Reply-To: <20210719160346.609914-1-tabba@google.com>
Message-Id: <20210719160346.609914-10-tabba@google.com>
Subject: [PATCH v3 09/15] KVM: arm64: Add config register bit definitions
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
 alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
 christoffer.dall@arm.com, pbonzini@redhat.com, drjones@redhat.com,
 qperret@google.com, kvm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kernel-team@android.com,
 tabba@google.com
List-ID: X-Mailing-List: kvm@vger.kernel.org

Add hardware configuration register bit definitions for HCR_EL2 and
MDCR_EL2. Future patches toggle these hyp configuration register bits
to trap on certain accesses.

No functional change intended.

Signed-off-by: Fuad Tabba
Acked-by: Will Deacon
---
 arch/arm64/include/asm/kvm_arm.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index a928b2dc0b0f..327120c0089f 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -12,8 +12,13 @@
 #include
 
 /* Hyp Configuration Register (HCR) bits */
+
+#define HCR_TID5	(UL(1) << 58)
+#define HCR_DCT		(UL(1) << 57)
 #define HCR_ATA_SHIFT	56
 #define HCR_ATA		(UL(1) << HCR_ATA_SHIFT)
+#define HCR_AMVOFFEN	(UL(1) << 51)
+#define HCR_FIEN	(UL(1) << 47)
 #define HCR_FWB		(UL(1) << 46)
 #define HCR_API		(UL(1) << 41)
 #define HCR_APK		(UL(1) << 40)
@@ -56,6 +61,7 @@
 #define HCR_PTW		(UL(1) << 2)
 #define HCR_SWIO	(UL(1) << 1)
 #define HCR_VM		(UL(1) << 0)
+#define HCR_RES0	((UL(1) << 48) | (UL(1) << 39))
 
 /*
  * The bits we set in HCR:
@@ -277,11 +283,21 @@
 #define CPTR_EL2_TZ		(1 << 8)
 #define CPTR_NVHE_EL2_RES1	0x000032ff /* known RES1 bits in CPTR_EL2 (nVHE) */
 #define CPTR_EL2_DEFAULT	CPTR_NVHE_EL2_RES1
+#define CPTR_NVHE_EL2_RES0	(GENMASK(63, 32) |	\
+				 GENMASK(29, 21) |	\
+				 GENMASK(19, 14) |	\
+				 BIT(11))
 
 /* Hyp Debug Configuration Register bits */
 #define MDCR_EL2_E2TB_MASK	(UL(0x3))
 #define MDCR_EL2_E2TB_SHIFT	(UL(24))
+#define MDCR_EL2_HPMFZS		(UL(1) << 36)
+#define MDCR_EL2_HPMFZO		(UL(1) << 29)
+#define MDCR_EL2_MTPME		(UL(1) << 28)
+#define MDCR_EL2_TDCC		(UL(1) << 27)
+#define MDCR_EL2_HCCD		(UL(1) << 23)
 #define MDCR_EL2_TTRF		(UL(1) << 19)
+#define MDCR_EL2_HPMD		(UL(1) << 17)
 #define MDCR_EL2_TPMS		(UL(1) << 14)
 #define MDCR_EL2_E2PB_MASK	(UL(0x3))
 #define MDCR_EL2_E2PB_SHIFT	(UL(12))
@@ -293,6 +309,12 @@
 #define MDCR_EL2_TPM		(UL(1) << 6)
 #define MDCR_EL2_TPMCR		(UL(1) << 5)
 #define MDCR_EL2_HPMN_MASK	(UL(0x1F))
+#define MDCR_EL2_RES0		(GENMASK(63, 37) |	\
+				 GENMASK(35, 30) |	\
+				 GENMASK(25, 24) |	\
+				 GENMASK(22, 20) |	\
+				 BIT(18) |		\
+				 GENMASK(16, 15))
 
 /* For compatibility with fault code shared with 32-bit */
 #define FSC_FAULT	ESR_ELx_FSC_FAULT

From patchwork Mon Jul 19 16:03:41 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12386251
Date: Mon, 19 Jul 2021 17:03:41 +0100
In-Reply-To: <20210719160346.609914-1-tabba@google.com>
Message-Id: <20210719160346.609914-11-tabba@google.com>
Subject: [PATCH v3 10/15] KVM: arm64: Guest exit handlers for nVHE hyp
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc:
maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, pbonzini@redhat.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add an array of pointers to handlers for various trap reasons in nVHE code. The current code selects how to fixup a guest on exit based on a series of if/else statements. Future patches will also require different handling for guest exists. Create an array of handlers to consolidate them. No functional change intended as the array isn't populated yet. Acked-by: Will Deacon Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/include/hyp/switch.h | 43 +++++++++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/switch.c | 35 ++++++++++++++++++++ 2 files changed, 78 insertions(+) diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index a0e78a6027be..5a2b89b96c67 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -409,6 +409,46 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu) return true; } +typedef int (*exit_handle_fn)(struct kvm_vcpu *); + +exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu); + +static exit_handle_fn kvm_get_hyp_exit_handler(struct kvm_vcpu *vcpu) +{ + return is_nvhe_hyp_code() ? kvm_get_nvhe_exit_handler(vcpu) : NULL; +} + +/* + * Allow the hypervisor to handle the exit with an exit handler if it has one. + * + * Returns true if the hypervisor handled the exit, and control should go back + * to the guest, or false if it hasn't. + */ +static bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu) +{ + bool is_handled = false; + exit_handle_fn exit_handler = kvm_get_hyp_exit_handler(vcpu); + + if (exit_handler) { + /* + * There's limited vcpu context here since it's not synced yet. 
+         * Ensure that relevant vcpu context that might be used by the
+         * exit_handler is in sync before it's called and if handled.
+         */
+        *vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
+        *vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR);
+
+        is_handled = exit_handler(vcpu);
+
+        if (is_handled) {
+            write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR);
+            write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR);
+        }
+    }
+
+    return is_handled;
+}
+
 /*
  * Return true when we were able to fixup the guest exit and should return to
  * the guest, false when we should restore the host state and return to the
@@ -496,6 +536,9 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
         goto guest;
     }
 
+    /* Check if there's an exit handler and allow it to handle the exit. */
+    if (kvm_hyp_handle_exit(vcpu))
+        goto guest;
 exit:
     /* Return to the host kernel and handle the exit */
     return false;
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 86f3d6482935..36da423006bd 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -158,6 +158,41 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
     write_sysreg(pmu->events_host, pmcntenset_el0);
 }
 
+typedef int (*exit_handle_fn)(struct kvm_vcpu *);
+
+static exit_handle_fn hyp_exit_handlers[] = {
+    [0 ... ESR_ELx_EC_MAX]   = NULL,
+    [ESR_ELx_EC_WFx]         = NULL,
+    [ESR_ELx_EC_CP15_32]     = NULL,
+    [ESR_ELx_EC_CP15_64]     = NULL,
+    [ESR_ELx_EC_CP14_MR]     = NULL,
+    [ESR_ELx_EC_CP14_LS]     = NULL,
+    [ESR_ELx_EC_CP14_64]     = NULL,
+    [ESR_ELx_EC_HVC32]       = NULL,
+    [ESR_ELx_EC_SMC32]       = NULL,
+    [ESR_ELx_EC_HVC64]       = NULL,
+    [ESR_ELx_EC_SMC64]       = NULL,
+    [ESR_ELx_EC_SYS64]       = NULL,
+    [ESR_ELx_EC_SVE]         = NULL,
+    [ESR_ELx_EC_IABT_LOW]    = NULL,
+    [ESR_ELx_EC_DABT_LOW]    = NULL,
+    [ESR_ELx_EC_SOFTSTP_LOW] = NULL,
+    [ESR_ELx_EC_WATCHPT_LOW] = NULL,
+    [ESR_ELx_EC_BREAKPT_LOW] = NULL,
+    [ESR_ELx_EC_BKPT32]      = NULL,
+    [ESR_ELx_EC_BRK64]       = NULL,
+    [ESR_ELx_EC_FP_ASIMD]    = NULL,
+    [ESR_ELx_EC_PAC]         = NULL,
+};
+
+exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu)
+{
+    u32 esr = kvm_vcpu_get_esr(vcpu);
+    u8 esr_ec = ESR_ELx_EC(esr);
+
+    return hyp_exit_handlers[esr_ec];
+}
+
 /* Switch to the guest for legacy non-VHE systems */
 int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {

From patchwork Mon Jul 19 16:03:42 2021
X-Patchwork-Id: 12386259
Date: Mon, 19 Jul 2021 17:03:42 +0100
In-Reply-To: <20210719160346.609914-1-tabba@google.com>
Message-Id: <20210719160346.609914-12-tabba@google.com>
Subject: [PATCH v3 11/15] KVM: arm64: Add trap handlers for protected VMs
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu

Add trap handlers for protected VMs. These are mainly for Sys64 and
debug traps.

No functional change intended as these are not yet hooked into the guest
exit handlers introduced earlier. So even when trapping is triggered,
the exit handlers would let the host handle it, as before.
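The dispatch scheme these patches build toward — a NULL-defaulted table of handlers indexed by exception class (EC), where NULL means "fall back to the host" — can be sketched in plain, userspace C. This is a minimal illustration, not the kernel code: the EC values and `struct vcpu` below are hypothetical placeholders, though the `[0 ... N] = NULL` range-designator idiom is the same GNU C extension the patch uses.

```c
#include <stddef.h>

/* Hypothetical exception-class values standing in for ESR_ELx_EC_*. */
enum { EC_WFX = 0x01, EC_SYS64 = 0x18, EC_MAX = 0x3f };

struct vcpu { int id; };    /* stand-in for struct kvm_vcpu */

typedef int (*exit_handle_fn)(struct vcpu *);

static int handle_sys64(struct vcpu *vcpu)
{
    (void)vcpu;
    /* Pretend the system-register trap was handled in hyp. */
    return 1;
}

/* Default every class to NULL, then hook the ones we handle. */
static exit_handle_fn exit_handlers[EC_MAX + 1] = {
    [0 ... EC_MAX] = NULL,    /* GNU range-designator extension */
    [EC_SYS64]     = handle_sys64,
};

/* Return the handler for this EC, or NULL to let the host deal with it. */
static exit_handle_fn get_exit_handler(unsigned int ec)
{
    return ec <= EC_MAX ? exit_handlers[ec] : NULL;
}
```

A NULL entry is not an error here: it simply means the exit takes the pre-existing path back to the host, which is exactly why the series can merge with "no functional change" while the table is still empty.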
Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_fixed_config.h | 178 +++++++++
 arch/arm64/include/asm/kvm_host.h         |   2 +
 arch/arm64/include/asm/kvm_hyp.h          |   3 +
 arch/arm64/kvm/Makefile                   |   2 +-
 arch/arm64/kvm/arm.c                      |  11 +
 arch/arm64/kvm/hyp/nvhe/Makefile          |   2 +-
 arch/arm64/kvm/hyp/nvhe/sys_regs.c        | 443 ++++++++++++++++++++++
 arch/arm64/kvm/pkvm.c                     | 183 +++++++++
 8 files changed, 822 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_fixed_config.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/sys_regs.c
 create mode 100644 arch/arm64/kvm/pkvm.c

diff --git a/arch/arm64/include/asm/kvm_fixed_config.h b/arch/arm64/include/asm/kvm_fixed_config.h
new file mode 100644
index 000000000000..b39a5de2c4b9
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_fixed_config.h
@@ -0,0 +1,178 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2021 Google LLC
+ * Author: Fuad Tabba
+ */
+
+#ifndef __ARM64_KVM_FIXED_CONFIG_H__
+#define __ARM64_KVM_FIXED_CONFIG_H__
+
+#include
+
+/*
+ * This file contains definitions for features to be allowed or restricted for
+ * guest virtual machines as a baseline, depending on what mode KVM is running
+ * in and on the type of guest that is running.
+ *
+ * The features are represented as the highest allowed value for a feature in
+ * the feature id registers. If the field is set to all ones (i.e., 0b1111),
+ * then it's only restricted by what the system allows. If the feature is set to
+ * another value, then that value would be the maximum value allowed and
+ * supported in pKVM, even if the system supports a higher value.
+ *
+ * Some features are forced to a certain value, in which case a SET bitmap is
+ * used to force these values.
+ */
+
+/*
+ * Allowed features for protected guests (Protected KVM)
+ *
+ * The approach taken here is to allow features that are:
+ * - needed by common Linux distributions (e.g., floating point)
+ * - trivial, e.g., supporting the feature doesn't introduce or require the
+ *   tracking of additional state
+ * - not trappable
+ */
+
+/*
+ * - Floating-point and Advanced SIMD:
+ *   Don't require much support other than maintaining the context, which KVM
+ *   already has.
+ * - AArch64 guests only (no support for AArch32 guests):
+ *   Simplify support in case of asymmetric AArch32 systems.
+ * - RAS (v1):
+ *   v1 doesn't require much additional support, but later versions do.
+ * - Data Independent Timing:
+ *   Trivial.
+ * Remaining features are not supported either because they require too much
+ * support from KVM, or risk leaking guest data.
+ */
+#define PVM_ID_AA64PFR0_ALLOW (\
+    FEATURE(ID_AA64PFR0_FP) | \
+    FIELD_PREP(FEATURE(ID_AA64PFR0_EL0), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+    FIELD_PREP(FEATURE(ID_AA64PFR0_EL1), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+    FIELD_PREP(FEATURE(ID_AA64PFR0_EL2), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+    FIELD_PREP(FEATURE(ID_AA64PFR0_EL3), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+    FIELD_PREP(FEATURE(ID_AA64PFR0_RAS), ID_AA64PFR0_RAS_V1) | \
+    FEATURE(ID_AA64PFR0_ASIMD) | \
+    FEATURE(ID_AA64PFR0_DIT) \
+    )
+
+/*
+ * - Branch Target Identification
+ * - Speculative Store Bypassing
+ *   These features are trivial to support.
+ */
+#define PVM_ID_AA64PFR1_ALLOW (\
+    FEATURE(ID_AA64PFR1_BT) | \
+    FEATURE(ID_AA64PFR1_SSBS) \
+    )
+
+/*
+ * No support for Scalable Vectors:
+ *   Requires additional support from KVM.
+ */
+#define PVM_ID_AA64ZFR0_ALLOW (0ULL)
+
+/*
+ * No support for debug, including breakpoints, and watchpoints:
+ *   Reduce complexity and avoid exposing/leaking guest data.
+ *
+ * NOTE: The Arm architecture mandates support for at least the Armv8 debug
+ * architecture, which would include at least 2 hardware breakpoints and
+ * watchpoints.
Providing that support to protected guests adds considerable
+ * state and complexity, and risks leaking guest data. Therefore, the reserved
+ * value of 0 is used for debug-related fields.
+ */
+#define PVM_ID_AA64DFR0_ALLOW (0ULL)
+
+/*
+ * These features are chosen because they are supported by KVM and to limit the
+ * configuration state space and make it more deterministic.
+ * - 40-bit IPA
+ * - 16-bit ASID
+ * - Mixed-endian
+ * - Distinction between Secure and Non-secure Memory
+ * - Mixed-endian at EL0 only
+ * - Non-context synchronizing exception entry and exit
+ */
+#define PVM_ID_AA64MMFR0_ALLOW (\
+    FIELD_PREP(FEATURE(ID_AA64MMFR0_PARANGE), ID_AA64MMFR0_PARANGE_40) | \
+    FIELD_PREP(FEATURE(ID_AA64MMFR0_ASID), ID_AA64MMFR0_ASID_16) | \
+    FEATURE(ID_AA64MMFR0_BIGENDEL) | \
+    FEATURE(ID_AA64MMFR0_SNSMEM) | \
+    FEATURE(ID_AA64MMFR0_BIGENDEL0) | \
+    FEATURE(ID_AA64MMFR0_EXS) \
+    )
+
+/*
+ * - 64KB granule not supported
+ */
+#define PVM_ID_AA64MMFR0_SET (\
+    FIELD_PREP(FEATURE(ID_AA64MMFR0_TGRAN64), ID_AA64MMFR0_TGRAN64_NI) \
+    )
+
+/*
+ * These features are chosen because they are supported by KVM and to limit the
+ * configuration state space and make it more deterministic.
+ * - Hardware translation table updates to Access flag and Dirty state
+ * - Number of VMID bits from CPU
+ * - Hierarchical Permission Disables
+ * - Privileged Access Never
+ * - SError interrupt exceptions from speculative reads
+ * - Enhanced Translation Synchronization
+ */
+#define PVM_ID_AA64MMFR1_ALLOW (\
+    FEATURE(ID_AA64MMFR1_HADBS) | \
+    FEATURE(ID_AA64MMFR1_VMIDBITS) | \
+    FEATURE(ID_AA64MMFR1_HPD) | \
+    FEATURE(ID_AA64MMFR1_PAN) | \
+    FEATURE(ID_AA64MMFR1_SPECSEI) | \
+    FEATURE(ID_AA64MMFR1_ETS) \
+    )
+
+/*
+ * These features are chosen because they are supported by KVM and to limit the
+ * configuration state space and make it more deterministic.
+ * - Common not Private translations
+ * - User Access Override
+ * - IESB bit in the SCTLR_ELx registers
+ * - Unaligned single-copy atomicity and atomic functions
+ * - ESR_ELx.EC value on an exception by read access to feature ID space
+ * - TTL field in address operations
+ * - Break-before-make sequences when changing translation block size
+ * - E0PDx mechanism
+ */
+#define PVM_ID_AA64MMFR2_ALLOW (\
+    FEATURE(ID_AA64MMFR2_CNP) | \
+    FEATURE(ID_AA64MMFR2_UAO) | \
+    FEATURE(ID_AA64MMFR2_IESB) | \
+    FEATURE(ID_AA64MMFR2_AT) | \
+    FEATURE(ID_AA64MMFR2_IDS) | \
+    FEATURE(ID_AA64MMFR2_TTL) | \
+    FEATURE(ID_AA64MMFR2_BBM) | \
+    FEATURE(ID_AA64MMFR2_E0PD) \
+    )
+
+/*
+ * Allow all features in this register because they are trivial to support, or
+ * are already supported by KVM:
+ * - LS64
+ * - XS
+ * - I8MM
+ * - DBG
+ * - BF16
+ * - SPECRES
+ * - SB
+ * - FRINTTS
+ * - PAuth
+ * - FPAC
+ * - LRCPC
+ * - FCMA
+ * - JSCVT
+ * - DPB
+ */
+#define PVM_ID_AA64ISAR1_ALLOW (~0ULL)
+
+#endif /* __ARM64_KVM_FIXED_CONFIG_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ac67d5699c68..e1ceadd69575 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -780,6 +780,8 @@ static inline bool kvm_vm_is_protected(struct kvm *kvm)
     return false;
 }
 
+void kvm_init_protected_traps(struct kvm_vcpu *vcpu);
+
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 657d0c94cf82..3f4866322f85 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -115,7 +115,10 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
 void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
 #endif
 
+extern u64 kvm_nvhe_sym(id_aa64pfr0_el1_sys_val);
+extern u64 kvm_nvhe_sym(id_aa64pfr1_el1_sys_val);
 extern u64
kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val); extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val); +extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val); #endif /* __ARM64_KVM_HYP_H__ */ diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile index 989bb5dad2c8..0be63f5c495f 100644 --- a/arch/arm64/kvm/Makefile +++ b/arch/arm64/kvm/Makefile @@ -14,7 +14,7 @@ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \ $(KVM)/vfio.o $(KVM)/irqchip.o $(KVM)/binary_stats.o \ arm.o mmu.o mmio.o psci.o perf.o hypercalls.o pvtime.o \ inject_fault.o va_layout.o handle_exit.o \ - guest.o debug.o reset.o sys_regs.o \ + guest.o debug.o pkvm.o reset.o sys_regs.o \ vgic-sys-reg-v3.o fpsimd.o pmu.o \ arch_timer.o trng.o\ vgic/vgic.o vgic/vgic-init.o \ diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 14b12f2c08c0..3f28549aff0d 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -618,6 +618,14 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu) ret = kvm_arm_pmu_v3_enable(vcpu); + /* + * Initialize traps for protected VMs. + * NOTE: Move trap initialization to EL2 once the code is in place for + * maintaining protected VM state at EL2 instead of the host. 
+ */ + if (kvm_vm_is_protected(kvm)) + kvm_init_protected_traps(vcpu); + return ret; } @@ -1781,8 +1789,11 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits) void *addr = phys_to_virt(hyp_mem_base); int ret; + kvm_nvhe_sym(id_aa64pfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); + kvm_nvhe_sym(id_aa64pfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1); kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); + kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1); ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP); if (ret) diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index 5df6193fc430..a23f417a0c20 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -14,7 +14,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs)) obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \ hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \ - cache.o setup.o mm.o mem_protect.o + cache.o setup.o mm.o mem_protect.o sys_regs.o obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \ ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o obj-y += $(lib-objs) diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c new file mode 100644 index 000000000000..6c7230aa70e9 --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c @@ -0,0 +1,443 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2021 Google LLC + * Author: Fuad Tabba + */ + +#include + +#include +#include +#include +#include + +#include + +#include "../../sys_regs.h" + +/* + * Copies of the host's CPU features registers holding sanitized values. 
+ */ +u64 id_aa64pfr0_el1_sys_val; +u64 id_aa64pfr1_el1_sys_val; +u64 id_aa64mmfr2_el1_sys_val; + +/* + * Inject an unknown/undefined exception to the guest. + */ +static void inject_undef(struct kvm_vcpu *vcpu) +{ + u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT); + + vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 | + KVM_ARM64_EXCEPT_AA64_ELx_SYNC | + KVM_ARM64_PENDING_EXCEPTION); + + __kvm_adjust_pc(vcpu); + + write_sysreg_el1(esr, SYS_ESR); + write_sysreg_el1(read_sysreg_el2(SYS_ELR), SYS_ELR); +} + +/* + * Accessor for undefined accesses. + */ +static bool undef_access(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + inject_undef(vcpu); + return false; +} + +/* + * Accessors for feature registers. + * + * If access is allowed, set the regval to the protected VM's view of the + * register and return true. + * Otherwise, inject an undefined exception and return false. + */ + +/* + * Returns the minimum feature supported and allowed. + */ +static u64 get_min_feature(u64 feature, u64 allowed_features, + u64 supported_features) +{ + const u64 allowed_feature = FIELD_GET(feature, allowed_features); + const u64 supported_feature = FIELD_GET(feature, supported_features); + + return min(allowed_feature, supported_feature); +} + +/* Accessor for ID_AA64PFR0_EL1. 
*/ +static bool pvm_access_id_aa64pfr0(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + const struct kvm *kvm = (const struct kvm *) kern_hyp_va(vcpu->kvm); + const u64 feature_ids = PVM_ID_AA64PFR0_ALLOW; + u64 set_mask = 0; + u64 clear_mask = 0; + + if (p->is_write) + return undef_access(vcpu, p, r); + + /* Get the RAS version allowed and supported */ + clear_mask |= FEATURE(ID_AA64PFR0_RAS); + set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_RAS), + get_min_feature(FEATURE(ID_AA64PFR0_RAS), + feature_ids, + id_aa64pfr0_el1_sys_val)); + + /* AArch32 guests: if not allowed then force guests to 64-bits only */ + clear_mask |= FEATURE(ID_AA64PFR0_EL0) | FEATURE(ID_AA64PFR0_EL1) | + FEATURE(ID_AA64PFR0_EL2) | FEATURE(ID_AA64PFR0_EL3); + + set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_EL0), + get_min_feature(FEATURE(ID_AA64PFR0_EL0), + feature_ids, + id_aa64pfr0_el1_sys_val)); + set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_EL1), + get_min_feature(FEATURE(ID_AA64PFR0_EL1), + feature_ids, + id_aa64pfr0_el1_sys_val)); + set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_EL2), + get_min_feature(FEATURE(ID_AA64PFR0_EL2), + feature_ids, + id_aa64pfr0_el1_sys_val)); + set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_EL3), + get_min_feature(FEATURE(ID_AA64PFR0_EL3), + feature_ids, + id_aa64pfr0_el1_sys_val)); + + /* Spectre and Meltdown mitigation */ + set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_CSV2), + (u64)kvm->arch.pfr0_csv2); + set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_CSV3), + (u64)kvm->arch.pfr0_csv3); + + p->regval = (id_aa64pfr0_el1_sys_val & feature_ids & ~clear_mask) | + set_mask; + return true; +} + +/* Accessor for ID_AA64PFR1_EL1. 
*/ +static bool pvm_access_id_aa64pfr1(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + const u64 feature_ids = PVM_ID_AA64PFR1_ALLOW; + + if (p->is_write) + return undef_access(vcpu, p, r); + + p->regval = id_aa64pfr1_el1_sys_val & feature_ids; + return true; +} + +/* Accessor for ID_AA64ZFR0_EL1. */ +static bool pvm_access_id_aa64zfr0(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + if (p->is_write) + return undef_access(vcpu, p, r); + + /* + * No support for Scalable Vectors, therefore, pKVM has no sanitized + * copy of the feature id register. + */ + BUILD_BUG_ON(PVM_ID_AA64ZFR0_ALLOW != 0ULL); + + p->regval = 0; + return true; +} + +/* Accessor for ID_AA64DFR0_EL1. */ +static bool pvm_access_id_aa64dfr0(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + if (p->is_write) + return undef_access(vcpu, p, r); + + /* + * No support for debug, including breakpoints, and watchpoints, + * therefore, pKVM has no sanitized copy of the feature id register. + */ + BUILD_BUG_ON(PVM_ID_AA64DFR0_ALLOW != 0ULL); + + p->regval = 0; + return true; +} + +/* + * No restrictions on ID_AA64ISAR1_EL1 features, therefore, pKVM has no + * sanitized copy of the feature id register and it's handled by the host. + */ +static_assert(PVM_ID_AA64ISAR1_ALLOW == ~0ULL); + +/* Accessor for ID_AA64MMFR0_EL1. */ +static bool pvm_access_id_aa64mmfr0(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + const u64 feature_ids = PVM_ID_AA64MMFR0_ALLOW; + u64 set_mask = PVM_ID_AA64MMFR0_SET; + + if (p->is_write) + return undef_access(vcpu, p, r); + + p->regval = (id_aa64mmfr0_el1_sys_val & feature_ids) | set_mask; + return true; +} + +/* Accessor for ID_AA64MMFR1_EL1. 
*/ +static bool pvm_access_id_aa64mmfr1(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + const u64 feature_ids = PVM_ID_AA64MMFR1_ALLOW; + + if (p->is_write) + return undef_access(vcpu, p, r); + + p->regval = id_aa64mmfr1_el1_sys_val & feature_ids; + return true; +} + +/* Accessor for ID_AA64MMFR2_EL1. */ +static bool pvm_access_id_aa64mmfr2(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + const u64 feature_ids = PVM_ID_AA64MMFR2_ALLOW; + + if (p->is_write) + return undef_access(vcpu, p, r); + + p->regval = id_aa64mmfr2_el1_sys_val & feature_ids; + return true; +} + +/* + * Accessor for AArch32 Processor Feature Registers. + * + * The value of these registers is "unknown" according to the spec if AArch32 + * isn't supported. + */ +static bool pvm_access_id_aarch32(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + if (p->is_write) + return undef_access(vcpu, p, r); + + /* + * No support for AArch32 guests, therefore, pKVM has no sanitized copy + * of AArch 32 feature id registers. + */ + BUILD_BUG_ON(FIELD_GET(FEATURE(ID_AA64PFR0_EL1), + PVM_ID_AA64PFR0_ALLOW) > ID_AA64PFR0_ELx_64BIT_ONLY); + + /* Use 0 for architecturally "unknown" values. */ + p->regval = 0; + return true; +} + +/* Mark the specified system register as an AArch32 feature register. */ +#define AARCH32(REG) { SYS_DESC(REG), .access = pvm_access_id_aarch32 } + +/* Mark the specified system register as not being handled in hyp. */ +#define HOST_HANDLED(REG) { SYS_DESC(REG), .access = NULL } + +/* + * Architected system registers. + * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2 + * + * NOTE: Anything not explicitly listed here will be *restricted by default*, + * i.e., it will lead to injecting an exception into the guest. + */ +static const struct sys_reg_desc pvm_sys_reg_descs[] = { + /* Cache maintenance by set/way operations are restricted. 
*/ + + /* Debug and Trace Registers are all restricted */ + + /* AArch64 mappings of the AArch32 ID registers */ + /* CRm=1 */ + AARCH32(SYS_ID_PFR0_EL1), + AARCH32(SYS_ID_PFR1_EL1), + AARCH32(SYS_ID_DFR0_EL1), + AARCH32(SYS_ID_AFR0_EL1), + AARCH32(SYS_ID_MMFR0_EL1), + AARCH32(SYS_ID_MMFR1_EL1), + AARCH32(SYS_ID_MMFR2_EL1), + AARCH32(SYS_ID_MMFR3_EL1), + + /* CRm=2 */ + AARCH32(SYS_ID_ISAR0_EL1), + AARCH32(SYS_ID_ISAR1_EL1), + AARCH32(SYS_ID_ISAR2_EL1), + AARCH32(SYS_ID_ISAR3_EL1), + AARCH32(SYS_ID_ISAR4_EL1), + AARCH32(SYS_ID_ISAR5_EL1), + AARCH32(SYS_ID_MMFR4_EL1), + AARCH32(SYS_ID_ISAR6_EL1), + + /* CRm=3 */ + AARCH32(SYS_MVFR0_EL1), + AARCH32(SYS_MVFR1_EL1), + AARCH32(SYS_MVFR2_EL1), + AARCH32(SYS_ID_PFR2_EL1), + AARCH32(SYS_ID_DFR1_EL1), + AARCH32(SYS_ID_MMFR5_EL1), + + /* AArch64 ID registers */ + /* CRm=4 */ + { SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = pvm_access_id_aa64pfr0 }, + { SYS_DESC(SYS_ID_AA64PFR1_EL1), .access = pvm_access_id_aa64pfr1 }, + { SYS_DESC(SYS_ID_AA64ZFR0_EL1), .access = pvm_access_id_aa64zfr0 }, + { SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = pvm_access_id_aa64dfr0 }, + HOST_HANDLED(SYS_ID_AA64DFR1_EL1), + HOST_HANDLED(SYS_ID_AA64AFR0_EL1), + HOST_HANDLED(SYS_ID_AA64AFR1_EL1), + HOST_HANDLED(SYS_ID_AA64ISAR0_EL1), + HOST_HANDLED(SYS_ID_AA64ISAR1_EL1), + { SYS_DESC(SYS_ID_AA64MMFR0_EL1), .access = pvm_access_id_aa64mmfr0 }, + { SYS_DESC(SYS_ID_AA64MMFR1_EL1), .access = pvm_access_id_aa64mmfr1 }, + { SYS_DESC(SYS_ID_AA64MMFR2_EL1), .access = pvm_access_id_aa64mmfr2 }, + + HOST_HANDLED(SYS_SCTLR_EL1), + HOST_HANDLED(SYS_ACTLR_EL1), + HOST_HANDLED(SYS_CPACR_EL1), + + HOST_HANDLED(SYS_RGSR_EL1), + HOST_HANDLED(SYS_GCR_EL1), + + /* Scalable Vector Registers are restricted. 
*/ + + HOST_HANDLED(SYS_TTBR0_EL1), + HOST_HANDLED(SYS_TTBR1_EL1), + HOST_HANDLED(SYS_TCR_EL1), + + HOST_HANDLED(SYS_APIAKEYLO_EL1), + HOST_HANDLED(SYS_APIAKEYHI_EL1), + HOST_HANDLED(SYS_APIBKEYLO_EL1), + HOST_HANDLED(SYS_APIBKEYHI_EL1), + HOST_HANDLED(SYS_APDAKEYLO_EL1), + HOST_HANDLED(SYS_APDAKEYHI_EL1), + HOST_HANDLED(SYS_APDBKEYLO_EL1), + HOST_HANDLED(SYS_APDBKEYHI_EL1), + HOST_HANDLED(SYS_APGAKEYLO_EL1), + HOST_HANDLED(SYS_APGAKEYHI_EL1), + + HOST_HANDLED(SYS_AFSR0_EL1), + HOST_HANDLED(SYS_AFSR1_EL1), + HOST_HANDLED(SYS_ESR_EL1), + + HOST_HANDLED(SYS_ERRIDR_EL1), + HOST_HANDLED(SYS_ERRSELR_EL1), + HOST_HANDLED(SYS_ERXFR_EL1), + HOST_HANDLED(SYS_ERXCTLR_EL1), + HOST_HANDLED(SYS_ERXSTATUS_EL1), + HOST_HANDLED(SYS_ERXADDR_EL1), + HOST_HANDLED(SYS_ERXMISC0_EL1), + HOST_HANDLED(SYS_ERXMISC1_EL1), + + HOST_HANDLED(SYS_TFSR_EL1), + HOST_HANDLED(SYS_TFSRE0_EL1), + + HOST_HANDLED(SYS_FAR_EL1), + HOST_HANDLED(SYS_PAR_EL1), + + /* Performance Monitoring Registers are restricted. */ + + HOST_HANDLED(SYS_MAIR_EL1), + HOST_HANDLED(SYS_AMAIR_EL1), + + /* Limited Ordering Regions Registers are restricted. */ + + HOST_HANDLED(SYS_VBAR_EL1), + HOST_HANDLED(SYS_DISR_EL1), + + /* GIC CPU Interface registers are restricted. */ + + HOST_HANDLED(SYS_CONTEXTIDR_EL1), + HOST_HANDLED(SYS_TPIDR_EL1), + + HOST_HANDLED(SYS_SCXTNUM_EL1), + + HOST_HANDLED(SYS_CNTKCTL_EL1), + + HOST_HANDLED(SYS_CCSIDR_EL1), + HOST_HANDLED(SYS_CLIDR_EL1), + HOST_HANDLED(SYS_CSSELR_EL1), + HOST_HANDLED(SYS_CTR_EL0), + + /* Performance Monitoring Registers are restricted. */ + + HOST_HANDLED(SYS_TPIDR_EL0), + HOST_HANDLED(SYS_TPIDRRO_EL0), + + HOST_HANDLED(SYS_SCXTNUM_EL0), + + /* Activity Monitoring Registers are restricted. */ + + HOST_HANDLED(SYS_CNTP_TVAL_EL0), + HOST_HANDLED(SYS_CNTP_CTL_EL0), + HOST_HANDLED(SYS_CNTP_CVAL_EL0), + + /* Performance Monitoring Registers are restricted. 
*/ + + HOST_HANDLED(SYS_DACR32_EL2), + HOST_HANDLED(SYS_IFSR32_EL2), + HOST_HANDLED(SYS_FPEXC32_EL2), +}; + +/* + * Handler for protected VM MSR, MRS or System instruction execution in AArch64. + * + * Return 1 if handled, or 0 if not. + */ +int kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu) +{ + const struct sys_reg_desc *r; + struct sys_reg_params params; + unsigned long esr = kvm_vcpu_get_esr(vcpu); + int Rt = kvm_vcpu_sys_get_rt(vcpu); + + params = esr_sys64_to_params(esr); + params.regval = vcpu_get_reg(vcpu, Rt); + + r = find_reg(¶ms, pvm_sys_reg_descs, ARRAY_SIZE(pvm_sys_reg_descs)); + + /* Undefined access (RESTRICTED). */ + if (r == NULL) { + inject_undef(vcpu); + return 1; + } + + /* Handled by the host (HOST_HANDLED) */ + if (r->access == NULL) + return 0; + + /* Handled by hyp: skip instruction if instructed to do so. */ + if (r->access(vcpu, ¶ms, r)) + __kvm_skip_instr(vcpu); + + vcpu_set_reg(vcpu, Rt, params.regval); + return 1; +} + +/* + * Handler for protected VM restricted exceptions. + * + * Inject an undefined exception into the guest and return 1 to indicate that + * it was handled. + */ +int kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu) +{ + inject_undef(vcpu); + return 1; +} diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c new file mode 100644 index 000000000000..b8430b3d97af --- /dev/null +++ b/arch/arm64/kvm/pkvm.c @@ -0,0 +1,183 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * KVM host (EL1) interface to Protected KVM (pkvm) code at EL2. + * + * Copyright (C) 2021 Google LLC + * Author: Fuad Tabba + */ + +#include +#include + +#include + +/* + * Set trap register values for features not allowed in ID_AA64PFR0. 
+ */ +static void pvm_init_traps_aa64pfr0(struct kvm_vcpu *vcpu) +{ + const u64 feature_ids = PVM_ID_AA64PFR0_ALLOW; + u64 hcr_set = 0; + u64 hcr_clear = 0; + u64 cptr_set = 0; + + /* Trap AArch32 guests */ + if (FIELD_GET(FEATURE(ID_AA64PFR0_EL0), feature_ids) < + ID_AA64PFR0_ELx_32BIT_64BIT || + FIELD_GET(FEATURE(ID_AA64PFR0_EL1), feature_ids) < + ID_AA64PFR0_ELx_32BIT_64BIT) + hcr_set |= HCR_RW | HCR_TID0; + + /* Trap RAS unless all versions are supported */ + if (FIELD_GET(FEATURE(ID_AA64PFR0_RAS), feature_ids) < + ID_AA64PFR0_RAS_ANY) { + hcr_set |= HCR_TERR | HCR_TEA; + hcr_clear |= HCR_FIEN; + } + + /* Trap AMU */ + if (!FIELD_GET(FEATURE(ID_AA64PFR0_AMU), feature_ids)) { + hcr_clear |= HCR_AMVOFFEN; + cptr_set |= CPTR_EL2_TAM; + } + + /* Trap ASIMD */ + if (!FIELD_GET(FEATURE(ID_AA64PFR0_ASIMD), feature_ids)) + cptr_set |= CPTR_EL2_TFP; + + /* Trap SVE */ + if (!FIELD_GET(FEATURE(ID_AA64PFR0_SVE), feature_ids)) + cptr_set |= CPTR_EL2_TZ; + + vcpu->arch.hcr_el2 |= hcr_set; + vcpu->arch.hcr_el2 &= ~hcr_clear; + vcpu->arch.cptr_el2 |= cptr_set; +} + +/* + * Set trap register values for features not allowed in ID_AA64PFR1. + */ +static void pvm_init_traps_aa64pfr1(struct kvm_vcpu *vcpu) +{ + const u64 feature_ids = PVM_ID_AA64PFR1_ALLOW; + u64 hcr_set = 0; + u64 hcr_clear = 0; + + /* Memory Tagging: Trap and Treat as Untagged if not allowed. */ + if (!FIELD_GET(FEATURE(ID_AA64PFR1_MTE), feature_ids)) { + hcr_set |= HCR_TID5; + hcr_clear |= HCR_DCT | HCR_ATA; + } + + vcpu->arch.hcr_el2 |= hcr_set; + vcpu->arch.hcr_el2 &= ~hcr_clear; +} + +/* + * Set trap register values for features not allowed in ID_AA64DFR0. 
+ */ +static void pvm_init_traps_aa64dfr0(struct kvm_vcpu *vcpu) +{ + const u64 feature_ids = PVM_ID_AA64DFR0_ALLOW; + u64 mdcr_set = 0; + u64 mdcr_clear = 0; + u64 cptr_set = 0; + + /* Trap/constrain PMU */ + if (!FIELD_GET(FEATURE(ID_AA64DFR0_PMUVER), feature_ids)) { + mdcr_set |= MDCR_EL2_TPM | MDCR_EL2_TPMCR; + mdcr_clear |= MDCR_EL2_HPME | MDCR_EL2_MTPME | + MDCR_EL2_HPMN_MASK; + } + + /* Trap Debug */ + if (!FIELD_GET(FEATURE(ID_AA64DFR0_DEBUGVER), feature_ids)) + mdcr_set |= MDCR_EL2_TDRA | MDCR_EL2_TDA | MDCR_EL2_TDE; + + /* Trap OS Double Lock */ + if (!FIELD_GET(FEATURE(ID_AA64DFR0_DOUBLELOCK), feature_ids)) + mdcr_set |= MDCR_EL2_TDOSA; + + /* Trap SPE */ + if (!FIELD_GET(FEATURE(ID_AA64DFR0_PMSVER), feature_ids)) { + mdcr_set |= MDCR_EL2_TPMS; + mdcr_clear |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT; + } + + /* Trap Trace Filter */ + if (!FIELD_GET(FEATURE(ID_AA64DFR0_TRACE_FILT), feature_ids)) + mdcr_set |= MDCR_EL2_TTRF; + + /* Trap Trace */ + if (!FIELD_GET(FEATURE(ID_AA64DFR0_TRACEVER), feature_ids)) + cptr_set |= CPTR_EL2_TTA; + + vcpu->arch.mdcr_el2 |= mdcr_set; + vcpu->arch.mdcr_el2 &= ~mdcr_clear; + vcpu->arch.cptr_el2 |= cptr_set; +} + +/* + * Set trap register values for features not allowed in ID_AA64MMFR0. + */ +static void pvm_init_traps_aa64mmfr0(struct kvm_vcpu *vcpu) +{ + const u64 feature_ids = PVM_ID_AA64MMFR0_ALLOW; + u64 mdcr_set = 0; + + /* Trap Debug Communications Channel registers */ + if (!FIELD_GET(FEATURE(ID_AA64MMFR0_FGT), feature_ids)) + mdcr_set |= MDCR_EL2_TDCC; + + vcpu->arch.mdcr_el2 |= mdcr_set; +} + +/* + * Set trap register values for features not allowed in ID_AA64MMFR1. + */ +static void pvm_init_traps_aa64mmfr1(struct kvm_vcpu *vcpu) +{ + const u64 feature_ids = PVM_ID_AA64MMFR1_ALLOW; + u64 hcr_set = 0; + + /* Trap LOR */ + if (!FIELD_GET(FEATURE(ID_AA64MMFR1_LOR), feature_ids)) + hcr_set |= HCR_TLOR; + + vcpu->arch.hcr_el2 |= hcr_set; +} + +/* + * Set baseline trap register values. 
+ */
+static void pvm_init_trap_regs(struct kvm_vcpu *vcpu)
+{
+	const u64 hcr_trap_feat_regs = HCR_TID3;
+	const u64 hcr_trap_impdef = HCR_TACR | HCR_TIDCP | HCR_TID1;
+
+	/*
+	 * Always trap:
+	 * - Feature id registers: to control features exposed to guests
+	 * - Implementation-defined features
+	 */
+	vcpu->arch.hcr_el2 |= hcr_trap_feat_regs | hcr_trap_impdef;
+
+	/* Clear res0 and set res1 bits to trap potential new features. */
+	vcpu->arch.hcr_el2 &= ~(HCR_RES0);
+	vcpu->arch.mdcr_el2 &= ~(MDCR_EL2_RES0);
+	vcpu->arch.cptr_el2 |= CPTR_NVHE_EL2_RES1;
+	vcpu->arch.cptr_el2 &= ~(CPTR_NVHE_EL2_RES0);
+}
+
+/*
+ * Initialize trap register values for protected VMs.
+ */
+void kvm_init_protected_traps(struct kvm_vcpu *vcpu)
+{
+	pvm_init_trap_regs(vcpu);
+	pvm_init_traps_aa64pfr0(vcpu);
+	pvm_init_traps_aa64pfr1(vcpu);
+	pvm_init_traps_aa64dfr0(vcpu);
+	pvm_init_traps_aa64mmfr0(vcpu);
+	pvm_init_traps_aa64mmfr1(vcpu);
+}

From patchwork Mon Jul 19 16:03:43 2021
Date: Mon, 19 Jul 2021 17:03:43 +0100
In-Reply-To: <20210719160346.609914-1-tabba@google.com>
Message-Id: <20210719160346.609914-13-tabba@google.com>
Subject: [PATCH v3 12/15] KVM: arm64: Move sanitized copies of CPU features
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu

Move the sanitized copies of the CPU feature registers to the recently
created sys_regs.c. This consolidates all copies in a more relevant file.

No functional change intended.

Signed-off-by: Fuad Tabba
Acked-by: Will Deacon
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 6 ------
 arch/arm64/kvm/hyp/nvhe/sys_regs.c    | 2 ++
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index d938ce95d3bd..925c7db7fa34 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -25,12 +25,6 @@ struct host_kvm host_kvm;
 
 static struct hyp_pool host_s2_pool;
 
-/*
- * Copies of the host's CPU features registers holding sanitized values.
- */
-u64 id_aa64mmfr0_el1_sys_val;
-u64 id_aa64mmfr1_el1_sys_val;
-
 static const u8 pkvm_hyp_id = 1;
 
 static void *host_s2_zalloc_pages_exact(size_t size)
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index 6c7230aa70e9..e928567430c1 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -20,6 +20,8 @@
  */
 u64 id_aa64pfr0_el1_sys_val;
 u64 id_aa64pfr1_el1_sys_val;
+u64 id_aa64mmfr0_el1_sys_val;
+u64 id_aa64mmfr1_el1_sys_val;
 u64 id_aa64mmfr2_el1_sys_val;
 
 /*

From patchwork Mon Jul 19 16:03:44 2021
Date: Mon, 19 Jul 2021 17:03:44 +0100
In-Reply-To: <20210719160346.609914-1-tabba@google.com>
Message-Id: <20210719160346.609914-14-tabba@google.com>
Subject: [PATCH v3 13/15] KVM: arm64: Trap access to pVM restricted features
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu

Trap accesses to restricted features for VMs running in protected mode.

Accesses to feature registers are emulated, and only supported features
are exposed to protected VMs.

Accesses to restricted registers, as well as restricted instructions,
are trapped, and an undefined exception is injected into the protected
guests, i.e., with EC = 0x0 (unknown reason). According to the Arm
Architecture Reference Manual, this is the EC used for unallocated or
undefined system registers or instructions.

This only affects the functionality of protected VMs; it should not
affect non-protected VMs when KVM is running in protected mode.
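The dispatch mechanism this patch sets up (below) is a handler table in which every exception class defaults to the restricted-feature handler, with specific classes overridden. A minimal sketch of that pattern outside the kernel — the EC values, handler names, and return codes here are illustrative stand-ins, not KVM's definitions:

```c
#include <stddef.h>

#define ESR_ELX_EC_MAX	0x3f	/* stand-in for ESR_ELx_EC_MAX */
#define EC_SYS64	0x18	/* stand-in EC for system-register traps */

typedef int (*exit_handle_fn)(int ec);

/* Default: treat the exit as a restricted-feature access. */
static int handle_restricted(int ec) { (void)ec; return 1; }

/* Specific handler for system-register accesses. */
static int handle_sys64(int ec) { (void)ec; return 2; }

/*
 * GNU C range designated initializer, as used by hyp_exit_handlers:
 * every slot is first filled with the default handler, then specific
 * exception classes are overridden by the later initializers.
 */
static exit_handle_fn handlers[ESR_ELX_EC_MAX + 1] = {
	[0 ... ESR_ELX_EC_MAX]	= handle_restricted,
	[EC_SYS64]		= handle_sys64,
};

static int dispatch(int ec)
{
	exit_handle_fn fn = handlers[ec];

	return fn ? fn(ec) : 0;
}
```

The `[0 ... N]` range initializer is a GCC/Clang extension, which is why the kernel can rely on it for this table.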
Signed-off-by: Fuad Tabba
Acked-by: Will Deacon
---
 arch/arm64/kvm/hyp/include/hyp/switch.h |  3 ++
 arch/arm64/kvm/hyp/nvhe/switch.c        | 52 ++++++++++++++++++-------
 2 files changed, 41 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 5a2b89b96c67..8431f1514280 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -33,6 +33,9 @@
 extern struct exception_table_entry __start___kvm_ex_table;
 extern struct exception_table_entry __stop___kvm_ex_table;
 
+int kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu);
+int kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu);
+
 /* Check whether the FP regs were dirtied while in the host-side run loop: */
 static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 36da423006bd..99bbbba90094 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -158,30 +158,54 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
 	write_sysreg(pmu->events_host, pmcntenset_el0);
 }
 
+/**
+ * Handle system register accesses for protected VMs.
+ *
+ * Return 1 if handled, or 0 if not.
+ */
+static int handle_pvm_sys64(struct kvm_vcpu *vcpu)
+{
+	return kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) ?
+		kvm_handle_pvm_sys64(vcpu) :
+		0;
+}
+
+/**
+ * Handle restricted feature accesses for protected VMs.
+ *
+ * Return 1 if handled, or 0 if not.
+ */
+static int handle_pvm_restricted(struct kvm_vcpu *vcpu)
+{
+	return kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) ?
+		kvm_handle_pvm_restricted(vcpu) :
+		0;
+}
+
 typedef int (*exit_handle_fn)(struct kvm_vcpu *);
 
 static exit_handle_fn hyp_exit_handlers[] = {
-	[0 ... ESR_ELx_EC_MAX]		= NULL,
+	[0 ... ESR_ELx_EC_MAX]		= handle_pvm_restricted,
 	[ESR_ELx_EC_WFx]		= NULL,
-	[ESR_ELx_EC_CP15_32]		= NULL,
-	[ESR_ELx_EC_CP15_64]		= NULL,
-	[ESR_ELx_EC_CP14_MR]		= NULL,
-	[ESR_ELx_EC_CP14_LS]		= NULL,
-	[ESR_ELx_EC_CP14_64]		= NULL,
+	[ESR_ELx_EC_CP15_32]		= handle_pvm_restricted,
+	[ESR_ELx_EC_CP15_64]		= handle_pvm_restricted,
+	[ESR_ELx_EC_CP14_MR]		= handle_pvm_restricted,
+	[ESR_ELx_EC_CP14_LS]		= handle_pvm_restricted,
+	[ESR_ELx_EC_CP14_64]		= handle_pvm_restricted,
 	[ESR_ELx_EC_HVC32]		= NULL,
 	[ESR_ELx_EC_SMC32]		= NULL,
 	[ESR_ELx_EC_HVC64]		= NULL,
 	[ESR_ELx_EC_SMC64]		= NULL,
-	[ESR_ELx_EC_SYS64]		= NULL,
-	[ESR_ELx_EC_SVE]		= NULL,
+	[ESR_ELx_EC_SYS64]		= handle_pvm_sys64,
+	[ESR_ELx_EC_SVE]		= handle_pvm_restricted,
 	[ESR_ELx_EC_IABT_LOW]		= NULL,
 	[ESR_ELx_EC_DABT_LOW]		= NULL,
-	[ESR_ELx_EC_SOFTSTP_LOW]	= NULL,
-	[ESR_ELx_EC_WATCHPT_LOW]	= NULL,
-	[ESR_ELx_EC_BREAKPT_LOW]	= NULL,
-	[ESR_ELx_EC_BKPT32]		= NULL,
-	[ESR_ELx_EC_BRK64]		= NULL,
-	[ESR_ELx_EC_FP_ASIMD]		= NULL,
+	[ESR_ELx_EC_SOFTSTP_LOW]	= handle_pvm_restricted,
+	[ESR_ELx_EC_WATCHPT_LOW]	= handle_pvm_restricted,
+	[ESR_ELx_EC_BREAKPT_LOW]	= handle_pvm_restricted,
+	[ESR_ELx_EC_BKPT32]		= handle_pvm_restricted,
+	[ESR_ELx_EC_BRK64]		= handle_pvm_restricted,
+	[ESR_ELx_EC_FP_ASIMD]		= handle_pvm_restricted,
 	[ESR_ELx_EC_PAC]		= NULL,
 };

From patchwork Mon Jul 19 16:03:45 2021
Date: Mon, 19 Jul 2021 17:03:45 +0100
In-Reply-To: <20210719160346.609914-1-tabba@google.com>
Message-Id: <20210719160346.609914-15-tabba@google.com>
Subject: [PATCH v3 14/15] KVM: arm64: Handle protected guests at 32 bits
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu

Protected KVM does not support protected AArch32 guests. However, it is
possible for a guest to force itself to run in AArch32, potentially
causing problems. Add an extra check so that if the hypervisor catches
the guest doing that, it can prevent the guest from running again by
resetting vcpu->arch.target and returning ARM_EXCEPTION_IL.
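The exit-path check just described can be sketched with plain stand-in types; the struct layout, field names, and the ARM_EXCEPTION_IL value here are illustrative, not the kernel's definitions:

```c
#include <stdbool.h>

#define ARM_EXCEPTION_IL 1	/* illustrative value, not the kernel's */

struct vcpu {
	int target;		/* -1 marks the vcpu as invalid */
	bool protected_vm;
	bool mode_is_32bit;
};

/*
 * Sketch of the check: a protected guest caught running in AArch32
 * when AArch32 is not allowed is made invalid (target = -1) so it
 * cannot be run again, and the exit code is forced to
 * ARM_EXCEPTION_IL. Returns true when the exit was forced.
 */
static bool reject_aarch32(struct vcpu *vcpu, bool el0_32bit_allowed,
			   int *exit_code)
{
	if (vcpu->protected_vm && !el0_32bit_allowed &&
	    vcpu->mode_is_32bit) {
		vcpu->target = -1;	/* unfit for purpose */
		*exit_code = ARM_EXCEPTION_IL;
		return true;
	}

	return false;
}
```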
Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
AArch32 systems")

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 8431f1514280..f09343e15a80 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -477,6 +478,29 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 		write_sysreg_el2(read_sysreg_el2(SYS_ELR) - 4, SYS_ELR);
 	}
 
+	/*
+	 * Protected VMs might not be allowed to run in AArch32. The check below
+	 * is based on the one in kvm_arch_vcpu_ioctl_run().
+	 * The ARMv8 architecture doesn't give the hypervisor a mechanism to
+	 * prevent a guest from dropping to AArch32 EL0 if implemented by the
+	 * CPU. If the hypervisor spots a guest in such a state ensure it is
+	 * handled, and don't trust the host to spot or fix it.
+	 */
+	if (unlikely(is_nvhe_hyp_code() &&
+		     kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) &&
+		     FIELD_GET(FEATURE(ID_AA64PFR0_EL0),
+			       PVM_ID_AA64PFR0_ALLOW) <
+			     ID_AA64PFR0_ELx_32BIT_64BIT &&
+		     vcpu_mode_is_32bit(vcpu))) {
+		/*
+		 * As we have caught the guest red-handed, decide that it isn't
+		 * fit for purpose anymore by making the vcpu invalid.
+		 */
+		vcpu->arch.target = -1;
+		*exit_code = ARM_EXCEPTION_IL;
+		goto exit;
+	}
+
 	/*
 	 * We're using the raw exception code in order to only process
 	 * the trap if no SError is pending. We will come back to the

From patchwork Mon Jul 19 16:03:46 2021
Date: Mon, 19 Jul 2021 17:03:46 +0100
In-Reply-To: <20210719160346.609914-1-tabba@google.com>
Message-Id: <20210719160346.609914-16-tabba@google.com>
Subject: [PATCH v3 15/15] KVM: arm64: Restrict protected VM capabilities
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu

Restrict protected VM capabilities based on the fixed configuration for
protected VMs.

No functional change intended in current KVM-supported modes (nVHE, VHE).

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_fixed_config.h | 10 ++++
 arch/arm64/kvm/arm.c                      | 63 ++++++++++++++++++++++-
 arch/arm64/kvm/pkvm.c                     | 30 +++++++++++
 3 files changed, 102 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_fixed_config.h b/arch/arm64/include/asm/kvm_fixed_config.h
index b39a5de2c4b9..14310b035bf7 100644
--- a/arch/arm64/include/asm/kvm_fixed_config.h
+++ b/arch/arm64/include/asm/kvm_fixed_config.h
@@ -175,4 +175,14 @@
  */
 #define PVM_ID_AA64ISAR1_ALLOW (~0ULL)
 
+/*
+ * Returns the maximum number of breakpoints supported for protected VMs.
+ */
+int kvm_arm_pkvm_get_max_brps(void);
+
+/*
+ * Returns the maximum number of watchpoints supported for protected VMs.
+ */
+int kvm_arm_pkvm_get_max_wrps(void);
+
 #endif /* __ARM64_KVM_FIXED_CONFIG_H__ */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 3f28549aff0d..bc41e3b71fab 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -188,9 +189,10 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	atomic_set(&kvm->online_vcpus, 0);
 }
 
-int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+static int kvm_check_extension(struct kvm *kvm, long ext)
 {
 	int r;
+
 	switch (ext) {
 	case KVM_CAP_IRQCHIP:
 		r = vgic_present;
@@ -281,6 +283,65 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	return r;
 }
 
+static int pkvm_check_extension(struct kvm *kvm, long ext, int kvm_cap)
+{
+	int r;
+
+	switch (ext) {
+	case KVM_CAP_ARM_PSCI:
+	case KVM_CAP_ARM_PSCI_0_2:
+	case KVM_CAP_NR_VCPUS:
+	case KVM_CAP_MAX_VCPUS:
+	case KVM_CAP_MAX_VCPU_ID:
+		r = kvm_cap;
+		break;
+	case KVM_CAP_ARM_EL1_32BIT:
+		r = kvm_cap &&
+		    (FIELD_GET(FEATURE(ID_AA64PFR0_EL1), PVM_ID_AA64PFR0_ALLOW) >=
+		     ID_AA64PFR0_ELx_32BIT_64BIT);
+		break;
+	case KVM_CAP_GUEST_DEBUG_HW_BPS:
+		r = min(kvm_cap, kvm_arm_pkvm_get_max_brps());
+		break;
+	case KVM_CAP_GUEST_DEBUG_HW_WPS:
+		r = min(kvm_cap, kvm_arm_pkvm_get_max_wrps());
+		break;
+	case KVM_CAP_ARM_PMU_V3:
+		r = kvm_cap &&
+		    FIELD_GET(FEATURE(ID_AA64DFR0_PMUVER), PVM_ID_AA64DFR0_ALLOW);
+		break;
+	case KVM_CAP_ARM_SVE:
+		r = kvm_cap &&
+		    FIELD_GET(FEATURE(ID_AA64PFR0_SVE), PVM_ID_AA64PFR0_ALLOW);
+		break;
+	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
+		r = kvm_cap &&
+		    FIELD_GET(FEATURE(ID_AA64ISAR1_API), PVM_ID_AA64ISAR1_ALLOW) &&
+		    FIELD_GET(FEATURE(ID_AA64ISAR1_APA), PVM_ID_AA64ISAR1_ALLOW);
+		break;
+	case KVM_CAP_ARM_PTRAUTH_GENERIC:
+		r = kvm_cap &&
+		    FIELD_GET(FEATURE(ID_AA64ISAR1_GPI), PVM_ID_AA64ISAR1_ALLOW) &&
+		    FIELD_GET(FEATURE(ID_AA64ISAR1_GPA), PVM_ID_AA64ISAR1_ALLOW);
+		break;
+	default:
+		r = 0;
+		break;
+	}
+
+	return r;
+}
+
+int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+{
+	int r = kvm_check_extension(kvm, ext);
+
+	if (unlikely(kvm && kvm_vm_is_protected(kvm)))
+		r = pkvm_check_extension(kvm, ext, r);
+
+	return r;
+}
+
 long kvm_arch_dev_ioctl(struct file *filp,
 			unsigned int ioctl, unsigned long arg)
 {
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index b8430b3d97af..d41553594d08 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -181,3 +181,33 @@ void kvm_init_protected_traps(struct kvm_vcpu *vcpu)
 	pvm_init_traps_aa64mmfr0(vcpu);
 	pvm_init_traps_aa64mmfr1(vcpu);
 }
+
+int kvm_arm_pkvm_get_max_brps(void)
+{
+	int num = FIELD_GET(FEATURE(ID_AA64DFR0_BRPS), PVM_ID_AA64DFR0_ALLOW);
+
+	/*
+	 * If breakpoints are supported, the maximum number is 1 + the field.
+	 * Otherwise, return 0, which is not compliant with the architecture,
+	 * but is reserved and is used here to indicate no debug support.
+	 */
+	if (num)
+		return 1 + num;
+	else
+		return 0;
+}
+
+int kvm_arm_pkvm_get_max_wrps(void)
+{
+	int num = FIELD_GET(FEATURE(ID_AA64DFR0_WRPS), PVM_ID_AA64DFR0_ALLOW);
+
+	/*
+	 * If watchpoints are supported, the maximum number is 1 + the field.
+	 * Otherwise, return 0, which is not compliant with the architecture,
+	 * but is reserved and is used here to indicate no debug support.
+	 */
+	if (num)
+		return 1 + num;
+	else
+		return 0;
+}
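The "1 + field, or 0" computation used by both helpers above can be sketched with a simplified stand-in for the kernel's FIELD_GET macro. The mask below is illustrative of ID_AA64DFR0_EL1.BRPs, which occupies bits [15:12]; the helper names are not the kernel's:

```c
#include <stdint.h>

/*
 * Minimal FIELD_GET-style helper: extract the field described by a
 * contiguous mask by shifting down to the mask's lowest set bit
 * (a simplified stand-in for the kernel macro in linux/bitfield.h).
 */
static uint64_t field_get(uint64_t mask, uint64_t reg)
{
	return (reg & mask) >> __builtin_ctzll(mask);
}

/* ID_AA64DFR0_EL1.BRPs lives in bits [15:12]. */
#define BRPS_MASK 0xf000ULL

/*
 * Mirror of the logic in kvm_arm_pkvm_get_max_brps(): the number of
 * breakpoints is 1 + the field value when the field is nonzero, and 0
 * (meaning "no debug support") when the field is zero.
 */
static int max_brps(uint64_t allowed)
{
	int num = (int)field_get(BRPS_MASK, allowed);

	return num ? 1 + num : 0;
}
```

`__builtin_ctzll` is a GCC/Clang builtin; it is well defined here because the mask is a nonzero constant.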