From patchwork Fri Sep 24 12:53:30 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515263
Date: Fri, 24 Sep 2021 13:53:30 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-2-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 01/30] KVM: arm64: placeholder to check if VM is protected
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
 alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
 christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com,
 kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kernel-team@android.com, tabba@google.com

Add a function to check whether a VM is protected (under pKVM). Since
the creation of protected VMs isn't enabled yet, this is a placeholder
that always returns false. The intention is for this to become a check
for protected VMs in the future (see Will's RFC).

No functional change intended.
Acked-by: Will Deacon
Signed-off-by: Fuad Tabba
Link: https://lore.kernel.org/kvmarm/20210603183347.1695-1-will@kernel.org/
---
 arch/arm64/include/asm/kvm_host.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7cd7d5c8c4bc..adb21a7f0891 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -763,6 +763,11 @@ void kvm_arch_free_vm(struct kvm *kvm);
 
 int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type);
 
+static inline bool kvm_vm_is_protected(struct kvm *kvm)
+{
+	return false;
+}
+
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);

From patchwork Fri Sep 24 12:53:31 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515265
Date: Fri, 24 Sep 2021 13:53:31 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-3-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 02/30] [DONOTMERGE] Temporarily disable unused variable warning
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
 alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
 christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com,
 kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kernel-team@android.com, tabba@google.com

Later patches add variables and functions that won't be used
immediately. Disable the warnings until the variables are used.

Signed-off-by: Fuad Tabba
---
 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index ed669b2d705d..0278bd28bd97 100644
--- a/Makefile
+++ b/Makefile
@@ -504,7 +504,7 @@ KBUILD_CFLAGS   := -Wall -Wundef -Werror=strict-prototypes -Wno-trigraphs \
 		   -fno-strict-aliasing -fno-common -fshort-wchar -fno-PIE \
 		   -Werror=implicit-function-declaration -Werror=implicit-int \
 		   -Werror=return-type -Wno-format-security \
-		   -std=gnu89
+		   -std=gnu89 -Wno-unused-variable -Wno-unused-function
 KBUILD_CPPFLAGS := -D__KERNEL__
 KBUILD_AFLAGS_KERNEL :=
 KBUILD_CFLAGS_KERNEL :=

From patchwork Fri Sep 24 12:53:32 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515267
Date: Fri, 24 Sep 2021 13:53:32 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-4-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 03/30] [DONOTMERGE] Coccinelle scripts for refactoring
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
 alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
 christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com,
 kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
kernel-team@android.com, tabba@google.com X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210924_055410_517795_D5BF9848 X-CRM114-Status: GOOD ( 16.71 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org These Coccinelle scripts are used in the coming refactoring patches. Adding them as a commit to keep them as part of the history. For running the scripts please use a recent version of Coccinelle, which has this patch [*]. Signed-off-by: Fuad Tabba [*] Link: https://lore.kernel.org/cocci/alpine.DEB.2.22.394.2104211654020.13358@hadrien/T/#t --- cocci_refactor/add_ctxt.cocci | 169 ++++++++++++++++++++++++ cocci_refactor/add_hypstate.cocci | 125 ++++++++++++++++++ cocci_refactor/hyp_ctxt.cocci | 38 ++++++ cocci_refactor/range.cocci | 50 +++++++ cocci_refactor/remove_unused.cocci | 69 ++++++++++ cocci_refactor/test.cocci | 20 +++ cocci_refactor/use_ctxt.cocci | 32 +++++ cocci_refactor/use_ctxt_access.cocci | 39 ++++++ cocci_refactor/use_hypstate.cocci | 63 +++++++++ cocci_refactor/vcpu_arch_ctxt.cocci | 13 ++ cocci_refactor/vcpu_declr.cocci | 59 +++++++++ cocci_refactor/vcpu_flags.cocci | 10 ++ cocci_refactor/vcpu_hyp_accessors.cocci | 35 +++++ cocci_refactor/vcpu_hyp_state.cocci | 30 +++++ cocci_refactor/vgic3_cpu.cocci | 118 +++++++++++++++++ 15 files changed, 870 insertions(+) create mode 100644 cocci_refactor/add_ctxt.cocci create mode 100644 cocci_refactor/add_hypstate.cocci create mode 100644 cocci_refactor/hyp_ctxt.cocci create mode 100644 cocci_refactor/range.cocci create mode 100644 cocci_refactor/remove_unused.cocci create mode 100644 cocci_refactor/test.cocci create mode 100644 cocci_refactor/use_ctxt.cocci create mode 100644 cocci_refactor/use_ctxt_access.cocci create 
mode 100644 cocci_refactor/use_hypstate.cocci create mode 100644 cocci_refactor/vcpu_arch_ctxt.cocci create mode 100644 cocci_refactor/vcpu_declr.cocci create mode 100644 cocci_refactor/vcpu_flags.cocci create mode 100644 cocci_refactor/vcpu_hyp_accessors.cocci create mode 100644 cocci_refactor/vcpu_hyp_state.cocci create mode 100644 cocci_refactor/vgic3_cpu.cocci diff --git a/cocci_refactor/add_ctxt.cocci b/cocci_refactor/add_ctxt.cocci new file mode 100644 index 000000000000..203644944ace --- /dev/null +++ b/cocci_refactor/add_ctxt.cocci @@ -0,0 +1,169 @@ +// + +/* +spatch --sp-file add_ctxt.cocci --dir arch/arm64/kvm/hyp --ignore arch/arm64/kvm/hyp/nvhe/debug-sr.c --ignore arch/arm64/kvm/hyp/vhe/debug-sr.c --include-headers --in-place +*/ + + +@exists@ +identifier vcpu; +fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; +identifier fc; +@@ +<... +( + struct kvm_vcpu *vcpu = NULL; ++ struct kvm_cpu_context *vcpu_ctxt; +| + struct kvm_vcpu *vcpu = ...; ++ struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); +| + struct kvm_vcpu *vcpu; ++ struct kvm_cpu_context *vcpu_ctxt; +) +<... + vcpu = ...; ++ vcpu_ctxt = &vcpu_ctxt(vcpu); +...> +fc(..., vcpu, ...) +...> + +@exists@ +identifier func != {kvm_arch_vcpu_run_pid_change}; +identifier fc != {vcpu_ctxt}; +identifier vcpu; +fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; +@@ +func(..., struct kvm_vcpu *vcpu, ...) { ++ struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); +<+... +fc(..., vcpu, ...) +...+> + } + +@@ +expression a, b; +identifier vcpu; +fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; +iterator name kvm_for_each_vcpu; +identifier fc; +@@ +kvm_for_each_vcpu(a, vcpu, b) + { ++ vcpu_ctxt = &vcpu_ctxt(vcpu); +<+... +fc(..., vcpu, ...) +...+> + } + +@@ +identifier vcpu_ctxt, vcpu; +iterator name kvm_for_each_vcpu; +type T; +identifier x; +statement S1, S2; +@@ +kvm_for_each_vcpu(...) + { +- vcpu_ctxt = &vcpu_ctxt(vcpu); +... when != S1 ++ vcpu_ctxt = &vcpu_ctxt(vcpu); + S2 + ... 
when any + } + +@ +disable optional_qualifier +exists +@ +identifier vcpu; +identifier vcpu_ctxt; +@@ +<... + const struct kvm_vcpu *vcpu = ...; +- struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); ++ const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); +...> + +@disable optional_qualifier@ +identifier func, vcpu; +identifier vcpu_ctxt; +@@ +func(..., const struct kvm_vcpu *vcpu, ...) { +- struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); ++ const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); +... + } + +@exists@ +expression r1, r2; +identifier vcpu; +fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; +@@ +( +- vcpu_gp_regs(vcpu) ++ ctxt_gp_regs(vcpu_ctxt) +| +- vcpu_spsr_abt(vcpu) ++ ctxt_spsr_abt(vcpu_ctxt) +| +- vcpu_spsr_und(vcpu) ++ ctxt_spsr_und(vcpu_ctxt) +| +- vcpu_spsr_irq(vcpu) ++ ctxt_spsr_irq(vcpu_ctxt) +| +- vcpu_spsr_fiq(vcpu) ++ ctxt_spsr_fiq(vcpu_ctxt) +| +- vcpu_fp_regs(vcpu) ++ ctxt_fp_regs(vcpu_ctxt) +| +- __vcpu_sys_reg(vcpu, r1) ++ ctxt_sys_reg(vcpu_ctxt, r1) +| +- __vcpu_read_sys_reg(vcpu, r1) ++ __ctxt_read_sys_reg(vcpu_ctxt, r1) +| +- __vcpu_write_sys_reg(vcpu, r1, r2) ++ __ctxt_write_sys_reg(vcpu_ctxt, r1, r2) +| +- __vcpu_write_spsr(vcpu, r1) ++ __ctxt_write_spsr(vcpu_ctxt, r1) +| +- __vcpu_write_spsr_abt(vcpu, r1) ++ __ctxt_write_spsr_abt(vcpu_ctxt, r1) +| +- __vcpu_write_spsr_und(vcpu, r1) ++ __ctxt_write_spsr_und(vcpu_ctxt, r1) +| +- vcpu_pc(vcpu) ++ ctxt_pc(vcpu_ctxt) +| +- vcpu_cpsr(vcpu) ++ ctxt_cpsr(vcpu_ctxt) +| +- vcpu_mode_is_32bit(vcpu) ++ ctxt_mode_is_32bit(vcpu_ctxt) +| +- vcpu_set_thumb(vcpu) ++ ctxt_set_thumb(vcpu_ctxt) +| +- vcpu_get_reg(vcpu, r1) ++ ctxt_get_reg(vcpu_ctxt, r1) +| +- vcpu_set_reg(vcpu, r1, r2) ++ ctxt_set_reg(vcpu_ctxt, r1, r2) +) + + +/* Handles one case of a call within a call. 
*/ +@@ +expression r1, r2; +identifier vcpu; +fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; +@@ +- vcpu_pc(vcpu) ++ ctxt_pc(vcpu_ctxt) + +// diff --git a/cocci_refactor/add_hypstate.cocci b/cocci_refactor/add_hypstate.cocci new file mode 100644 index 000000000000..e8635d0e8f57 --- /dev/null +++ b/cocci_refactor/add_hypstate.cocci @@ -0,0 +1,125 @@ +// + +/* +FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]" ! -name "debug-sr*") arch/arm64/include/asm/kvm_hyp.h" +spatch --sp-file add_hypstate.cocci $FILES --in-place +*/ + +@exists@ +identifier vcpu; +fresh identifier hyps = vcpu ## "_hyps"; +identifier fc; +@@ +<... +( + struct kvm_vcpu *vcpu = NULL; ++ struct vcpu_hyp_state *hyps; +| + struct kvm_vcpu *vcpu = ...; ++ struct vcpu_hyp_state *hyps = &hyp_state(vcpu); +| + struct kvm_vcpu *vcpu; ++ struct vcpu_hyp_state *hyps; +) +<... + vcpu = ...; ++ hyps = &hyp_state(vcpu); +...> +fc(..., vcpu, ...) +...> + +@exists@ +identifier func != {kvm_arch_vcpu_run_pid_change}; +identifier vcpu; +fresh identifier hyps = vcpu ## "_hyps"; +identifier fc; +@@ +func(..., struct kvm_vcpu *vcpu, ...) { ++ struct vcpu_hyp_state *hyps = &hyp_state(vcpu); +<+... +fc(..., vcpu, ...) +...+> + } + +@@ +expression a, b; +identifier vcpu; +fresh identifier hyps = vcpu ## "_hyps"; +iterator name kvm_for_each_vcpu; +identifier fc; +@@ +kvm_for_each_vcpu(a, vcpu, b) + { ++ hyps = &hyp_state(vcpu); +<+... +fc(..., vcpu, ...) +...+> + } + +@@ +identifier hyps, vcpu; +iterator name kvm_for_each_vcpu; +statement S1, S2; +@@ +kvm_for_each_vcpu(...) + { +- hyps = &hyp_state(vcpu); +... when != S1 ++ hyps = &hyp_state(vcpu); + S2 + ... when any + } + +@ +disable optional_qualifier +exists +@ +identifier vcpu, hyps; +@@ +<... + const struct kvm_vcpu *vcpu = ...; +- struct vcpu_hyp_state *hyps = &hyp_state(vcpu); ++ const struct vcpu_hyp_state *hyps = &hyp_state(vcpu); +...> + + +@@ +identifier func, vcpu, hyps; +@@ +func(..., const struct kvm_vcpu *vcpu, ...) 
{ +- struct vcpu_hyp_state *hyps = &hyp_state(vcpu); ++ const struct vcpu_hyp_state *hyps = &hyp_state(vcpu); +... + } + +@exists@ +identifier vcpu; +fresh identifier hyps = vcpu ## "_hyps"; +@@ +( +- vcpu_hcr_el2(vcpu) ++ hyp_state_hcr_el2(hyps) +| +- vcpu_mdcr_el2(vcpu) ++ hyp_state_mdcr_el2(hyps) +| +- vcpu_vsesr_el2(vcpu) ++ hyp_state_vsesr_el2(hyps) +| +- vcpu_fault(vcpu) ++ hyp_state_fault(hyps) +| +- vcpu_flags(vcpu) ++ hyp_state_flags(hyps) +| +- vcpu_has_sve(vcpu) ++ hyp_state_has_sve(hyps) +| +- vcpu_has_ptrauth(vcpu) ++ hyp_state_has_ptrauth(hyps) +| +- kvm_arm_vcpu_sve_finalized(vcpu) ++ kvm_arm_hyp_state_sve_finalized(hyps) +) + +// diff --git a/cocci_refactor/hyp_ctxt.cocci b/cocci_refactor/hyp_ctxt.cocci new file mode 100644 index 000000000000..af7974e3a502 --- /dev/null +++ b/cocci_refactor/hyp_ctxt.cocci @@ -0,0 +1,38 @@ +// Remove vcpu if all we're using is hypstate and ctxt + +/* +FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]")" +spatch --sp-file hyp_ctxt.cocci $FILES --in-place; +*/ + +// + +@remove@ +identifier func !~ "^trap_|^access_|dbg_to_reg|check_pmu_access_disabled|match_mpidr|get_ctr_el0|emulate_cp|unhandled_cp_access|index_to_sys_reg_desc|kvm_pmu_|pmu_counter_idx_valid|reset_|read_from_write_only|write_to_read_only|undef_access|vgic_|kvm_handle_|handle_sve|handle_smc|handle_no_fpsimd|id_visibility|reg_to_dbg|ptrauth_visibility|sve_visibility|kvm_arch_sched_in|kvm_arch_vcpu_|kvm_vcpu_pmu_|kvm_psci_|kvm_arm_copy_fw_reg_indices|kvm_arm_pvtime_|kvm_trng_|kvm_arm_timer_"; +identifier vcpu; +fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; +fresh identifier vcpu_hyps = vcpu ## "_hyps"; +identifier hyps_remove; +identifier ctxt_remove; +@@ +func(..., +- struct kvm_vcpu *vcpu ++ struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps +,...) { +?- struct vcpu_hyp_state *hyps_remove = ...; +?- struct kvm_cpu_context *ctxt_remove = ...; +... 
when != vcpu + } + +@@ +identifier vcpu; +fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; +fresh identifier vcpu_hyps = vcpu ## "_hyps"; +identifier remove.func; +@@ + func( +- vcpu ++ vcpu_ctxt, vcpu_hyps + , ...) + +// \ No newline at end of file diff --git a/cocci_refactor/range.cocci b/cocci_refactor/range.cocci new file mode 100644 index 000000000000..d99b9ee30657 --- /dev/null +++ b/cocci_refactor/range.cocci @@ -0,0 +1,50 @@ + + +// + +/* + FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h"; spatch --sp-file range.cocci $FILES +*/ + +@initialize:python@ +@@ +starts = ("start", "begin", "from", "floor", "addr", "kaddr") +ends = ("size", "length", "len") + +//ends = ("end", "to", "ceiling", "size", "length", "len") + + +@start_end@ +identifier f; +type A, B; +identifier start, end; +parameter list[n] ps; +@@ +f(ps, A start, B end, ...) { +... +} + +@script:python@ +start << start_end.start; +end << start_end.end; +ta << start_end.A; +tb << start_end.B; +@@ + +if ta != tb and tb != "size_t": + cocci.include_match(False) +elif not any(x in start for x in starts) and not any(x in end for x in ends): + cocci.include_match(False) + +@@ +identifier f = start_end.f; +expression list[start_end.n] xs; +expression a, b; +@@ +( +* f(xs, a, a, ...) +| +* f(xs, a, a - b, ...) +) + +// \ No newline at end of file diff --git a/cocci_refactor/remove_unused.cocci b/cocci_refactor/remove_unused.cocci new file mode 100644 index 000000000000..c06278398198 --- /dev/null +++ b/cocci_refactor/remove_unused.cocci @@ -0,0 +1,69 @@ +// + +/* +spatch --sp-file remove_unused.cocci --dir arch/arm64/kvm/hyp --in-place --include-headers --force-diff +*/ + +@@ +identifier hyps; +@@ +{ +... +( +- struct vcpu_hyp_state *hyps = ...; +| +- struct vcpu_hyp_state *hyps; +) +... when != hyps + when != if (...) { <+...hyps...+> } +?- hyps = ...; +... when != hyps + when != if (...) { <+...hyps...+> } +} + +@@ +identifier vcpu_ctxt; +@@ +{ +... 
+( +- struct kvm_cpu_context *vcpu_ctxt = ...; +| +- struct kvm_cpu_context *vcpu_ctxt; +) +... when != vcpu_ctxt + when != if (...) { <+...vcpu_ctxt...+> } +?- vcpu_ctxt = ...; +... when != vcpu_ctxt + when != if (...) { <+...vcpu_ctxt...+> } +} + +@@ +identifier x; +identifier func; +statement S; +@@ +func(...) + { +... +struct kvm_cpu_context *x = ...; ++ +S +... + } + +@@ +identifier x; +identifier func; +statement S; +@@ +func(...) + { +... +struct vcpu_hyp_state *x = ...; ++ +S +... + } + +// diff --git a/cocci_refactor/test.cocci b/cocci_refactor/test.cocci new file mode 100644 index 000000000000..5eb685240ce7 --- /dev/null +++ b/cocci_refactor/test.cocci @@ -0,0 +1,20 @@ +/* + FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h"; spatch --sp-file test.cocci $FILES + +*/ + +@r@ +identifier fn; +@@ +fn(...) { + hello; + ... +} + +@@ +identifier r.fn; +@@ +static fn(...) { ++ world; + ... +} diff --git a/cocci_refactor/use_ctxt.cocci b/cocci_refactor/use_ctxt.cocci new file mode 100644 index 000000000000..f3f961f567fd --- /dev/null +++ b/cocci_refactor/use_ctxt.cocci @@ -0,0 +1,32 @@ +// +/* +spatch --sp-file use_ctxt.cocci --dir arch/arm64/kvm/hyp --ignore debug-sr --include-headers --in-place +spatch --sp-file use_ctxt.cocci --dir arch/arm64/kvm/hyp --ignore debug-sr --include-headers --in-place +*/ + +@remove_vcpu@ +identifier vcpu; +fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; +identifier ctxt_remove; +identifier func !~ "(reset_unknown|reset_val|kvm_pmu_valid_counter_mask|reset_pmcr|kvm_arch_vcpu_in_kernel|__vgic_v3_)"; +@@ +func( +- struct kvm_vcpu *vcpu ++ struct kvm_cpu_context *vcpu_ctxt +, ...) { +- struct kvm_cpu_context *ctxt_remove = ...; +... when != vcpu + when != if (...) { <+...vcpu...+> } +} + +@@ +identifier vcpu; +fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; +identifier func = remove_vcpu.func; +@@ +func( +- vcpu ++ vcpu_ctxt + , ...) 
+ +// diff --git a/cocci_refactor/use_ctxt_access.cocci b/cocci_refactor/use_ctxt_access.cocci new file mode 100644 index 000000000000..74f94141e662 --- /dev/null +++ b/cocci_refactor/use_ctxt_access.cocci @@ -0,0 +1,39 @@ +// + +/* +spatch --sp-file use_ctxt_access.cocci --dir arch/arm64/kvm/ --include-headers --in-place +*/ + +@@ +constant r; +@@ +- __ctxt_sys_reg(&vcpu->arch.ctxt, r) ++ &__vcpu_sys_reg(vcpu, r) + +@@ +identifier r; +@@ +- vcpu->arch.ctxt.regs.r ++ vcpu_gp_regs(vcpu)->r + +@@ +identifier r; +@@ +- vcpu->arch.ctxt.fp_regs.r ++ vcpu_fp_regs(vcpu)->r + +@@ +identifier r; +fresh identifier accessor = "vcpu_" ## r; +@@ +- &vcpu->arch.ctxt.r ++ accessor(vcpu) + +@@ +identifier r; +fresh identifier accessor = "vcpu_" ## r; +@@ +- vcpu->arch.ctxt.r ++ *accessor(vcpu) + +// \ No newline at end of file diff --git a/cocci_refactor/use_hypstate.cocci b/cocci_refactor/use_hypstate.cocci new file mode 100644 index 000000000000..f685149de748 --- /dev/null +++ b/cocci_refactor/use_hypstate.cocci @@ -0,0 +1,63 @@ +// + +/* +FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]" ! -name "debug-sr*") arch/arm64/include/asm/kvm_hyp.h" +spatch --sp-file use_hypstate.cocci $FILES --in-place +*/ + + +@remove_vcpu_hyps@ +identifier vcpu; +fresh identifier hyps = vcpu ## "_hyps"; +identifier hyps_remove; +identifier func; +@@ +func( +- struct kvm_vcpu *vcpu ++ struct vcpu_hyp_state *hyps +, ...) { +- struct vcpu_hyp_state *hyps_remove = ...; +... when != vcpu + when != if (...) { <+...vcpu...+> } +} + +@@ +identifier vcpu; +fresh identifier hyps = vcpu ## "_hyps"; +identifier func = remove_vcpu_hyps.func; +@@ +func( +- vcpu ++ hyps + , ...) + +@remove_vcpu_hyps_ctxt@ +identifier vcpu; +fresh identifier hyps = vcpu ## "_hyps"; +identifier hyps_remove; +identifier ctxt_remove; +identifier func; +@@ +func( +- struct kvm_vcpu *vcpu ++ struct vcpu_hyp_state *hyps +, ...) { +- struct vcpu_hyp_state *hyps_remove = ...; +- struct kvm_cpu_context *ctxt_remove = ...; +... 
when != vcpu + when != if (...) { <+...vcpu...+> } + when != ctxt_remove + when != if (...) { <+...ctxt_remove...+> } +} + +@@ +identifier vcpu; +fresh identifier hyps = vcpu ## "_hyps"; +identifier func = remove_vcpu_hyps_ctxt.func; +@@ +func( +- vcpu ++ hyps + , ...) + +// diff --git a/cocci_refactor/vcpu_arch_ctxt.cocci b/cocci_refactor/vcpu_arch_ctxt.cocci new file mode 100644 index 000000000000..69b3a000de4e --- /dev/null +++ b/cocci_refactor/vcpu_arch_ctxt.cocci @@ -0,0 +1,13 @@ +// spatch --sp-file vcpu_arch_ctxt.cocci --no-includes --include-headers --dir arch/arm64 + +// +@@ +identifier vcpu; +@@ +( +- vcpu->arch.ctxt.regs ++ vcpu_gp_regs(vcpu) +| +- vcpu->arch.ctxt.fp_regs ++ vcpu_fp_regs(vcpu) +) diff --git a/cocci_refactor/vcpu_declr.cocci b/cocci_refactor/vcpu_declr.cocci new file mode 100644 index 000000000000..59cd46bd6b2d --- /dev/null +++ b/cocci_refactor/vcpu_declr.cocci @@ -0,0 +1,59 @@ + +/* +FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h"; spatch --sp-file vcpu_declr.cocci $FILES --in-place +*/ + +// + +@@ +identifier vcpu; +expression E; +@@ +<... +- struct kvm_vcpu *vcpu; ++ struct kvm_vcpu *vcpu = E; + +- vcpu = E; +...> + + +/* +@@ +identifier vcpu; +identifier f1, f2; +@@ +f1(...) +{ +- struct kvm_vcpu *vcpu = NULL; ++ struct kvm_vcpu *vcpu; +... when != f2(..., vcpu, ...) +} +*/ + +/* +@find_after@ +identifier vcpu; +position p; +identifier f; +@@ +<... + struct kvm_vcpu *vcpu@p; + ... when != vcpu = ...; + f(..., vcpu, ...); +...> + +@@ +identifier vcpu; +expression E; +position p != find_after.p; +@@ +<... +- struct kvm_vcpu *vcpu@p; ++ struct kvm_vcpu *vcpu = E; + ... 
+- vcpu = E; +...> + +*/ + +// diff --git a/cocci_refactor/vcpu_flags.cocci b/cocci_refactor/vcpu_flags.cocci new file mode 100644 index 000000000000..609bb7bd7bd0 --- /dev/null +++ b/cocci_refactor/vcpu_flags.cocci @@ -0,0 +1,10 @@ +// spatch --sp-file vcpu_flags.cocci --no-includes --include-headers --dir arch/arm64 + +// +@@ +expression vcpu; +@@ + +- vcpu->arch.flags ++ vcpu_flags(vcpu) +// \ No newline at end of file diff --git a/cocci_refactor/vcpu_hyp_accessors.cocci b/cocci_refactor/vcpu_hyp_accessors.cocci new file mode 100644 index 000000000000..506b56f7216f --- /dev/null +++ b/cocci_refactor/vcpu_hyp_accessors.cocci @@ -0,0 +1,35 @@ +// + +/* +spatch --sp-file vcpu_hyp_accessors.cocci --dir arch/arm64 --include-headers --in-place +*/ + +@find_defines@ +identifier macro; +identifier vcpu; +position p; +@@ +#define macro(vcpu) vcpu@p + +@@ +identifier vcpu; +position p != find_defines.p; +@@ +( +- vcpu@p->arch.hcr_el2 ++ vcpu_hcr_el2(vcpu) +| +- vcpu@p->arch.mdcr_el2 ++ vcpu_mdcr_el2(vcpu) +| +- vcpu@p->arch.vsesr_el2 ++ vcpu_vsesr_el2(vcpu) +| +- vcpu@p->arch.fault ++ vcpu_fault(vcpu) +| +- vcpu@p->arch.flags ++ vcpu_flags(vcpu) +) + +// diff --git a/cocci_refactor/vcpu_hyp_state.cocci b/cocci_refactor/vcpu_hyp_state.cocci new file mode 100644 index 000000000000..3005a6f11871 --- /dev/null +++ b/cocci_refactor/vcpu_hyp_state.cocci @@ -0,0 +1,30 @@ +// + +// spatch --sp-file vcpu_hyp_state.cocci --no-includes --include-headers --dir arch/arm64 --very-quiet --in-place + +@@ +expression vcpu; +@@ +- vcpu->arch. ++ vcpu->arch.hyp_state.
+( + hcr_el2 +| + mdcr_el2 +| + vsesr_el2 +| + fault +| + flags +| + sysregs_loaded_on_cpu +) + +@@ +identifier arch; +@@ +- arch.fault ++ arch.hyp_state.fault + +// \ No newline at end of file diff --git a/cocci_refactor/vgic3_cpu.cocci b/cocci_refactor/vgic3_cpu.cocci new file mode 100644 index 000000000000..f7495b2e49cb --- /dev/null +++ b/cocci_refactor/vgic3_cpu.cocci @@ -0,0 +1,118 @@ +// + +/* +spatch --sp-file vgic3_cpu.cocci arch/arm64/kvm/hyp/vgic-v3-sr.c --in-place +*/ + + +@@ +identifier vcpu; +fresh identifier vcpu_hyps = vcpu ## "_hyps"; +@@ +( +- kvm_vcpu_sys_get_rt ++ kvm_hyp_state_sys_get_rt +| +- kvm_vcpu_get_esr ++ kvm_hyp_state_get_esr +) +- (vcpu) ++ (vcpu_hyps) + +@add_cpu_if@ +identifier func; +identifier c; +@@ +int func( +- struct kvm_vcpu *vcpu ++ struct vgic_v3_cpu_if *cpu_if + , ...) +{ +<+... +- vcpu->arch.vgic_cpu.vgic_v3.c ++ cpu_if->c +...+> +} + +@@ +identifier func = add_cpu_if.func; +@@ + func( +- vcpu ++ cpu_if + , ... + ) + + +@add_vgic_ctxt_hyps@ +identifier func; +@@ +void func( +- struct kvm_vcpu *vcpu ++ struct vgic_v3_cpu_if *cpu_if, struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps + , ...) { +?- struct vcpu_hyp_state *vcpu_hyps = ...; +?- struct kvm_cpu_context *vcpu_ctxt = ...; + ... + } + +@@ +identifier func = add_vgic_ctxt_hyps.func; +@@ + func( +- vcpu, ++ cpu_if, vcpu_ctxt, vcpu_hyps, + ... + ) + + +@find_calls@ +identifier fn; +type a, b; +@@ +- void (*fn)(struct kvm_vcpu *, a, b); ++ void (*fn)(struct vgic_v3_cpu_if *, struct kvm_cpu_context *, struct vcpu_hyp_state *, a, b); + +@@ +identifier fn = find_calls.fn; +identifier a, b; +@@ +- fn(vcpu, a, b); ++ fn(cpu_if, vcpu_ctxt, vcpu_hyps, a, b); + +@@ +@@ +int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) { ++ struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; +... 
+} + +@remove@ +identifier func; +identifier vcpu; +fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; +fresh identifier vcpu_hyps = vcpu ## "_hyps"; +identifier hyps_remove; +identifier ctxt_remove; +@@ +func(..., +- struct kvm_vcpu *vcpu ++ struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps +,...) { +?- struct vcpu_hyp_state *hyps_remove = ...; +?- struct kvm_cpu_context *ctxt_remove = ...; +... when != vcpu + } + +@@ +identifier vcpu; +fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; +fresh identifier vcpu_hyps = vcpu ## "_hyps"; +identifier remove.func; +@@ + func( +- vcpu ++ vcpu_ctxt, vcpu_hyps + , ...) + +// From patchwork Fri Sep 24 12:53:33 2021 X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 12515269
Date: Fri, 24 Sep 2021 13:53:33 +0100 In-Reply-To: <20210924125359.2587041-1-tabba@google.com> Message-Id: <20210924125359.2587041-5-tabba@google.com> References: <20210924125359.2587041-1-tabba@google.com> Subject: [RFC PATCH v1 04/30] KVM: arm64: remove unused parameters and asm offsets From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com
Remove unused vcpu function parameters and asm-offset definitions. This makes the code cleaner and simplifies future refactoring. Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_hyp.h | 4 ++-- arch/arm64/kernel/asm-offsets.c | 1 - arch/arm64/kvm/hyp/nvhe/switch.c | 6 +++--- arch/arm64/kvm/hyp/nvhe/timer-sr.c | 4 ++-- 4 files changed, 7 insertions(+), 8 deletions(-) diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 9d60b3006efc..2e2b60a1b6c7 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -66,8 +66,8 @@ void __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if); int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu); #ifdef __KVM_NVHE_HYPERVISOR__ -void __timer_enable_traps(struct kvm_vcpu *vcpu); -void __timer_disable_traps(struct kvm_vcpu *vcpu); +void __timer_enable_traps(void); +void __timer_disable_traps(void); #endif #ifdef __KVM_NVHE_HYPERVISOR__ diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 0cb34ccb6e73..c2cc3a2813e6 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -109,7 +109,6 @@ int main(void) DEFINE(VCPU_CONTEXT, offsetof(struct kvm_vcpu, arch.ctxt)); DEFINE(VCPU_FAULT_DISR, offsetof(struct kvm_vcpu, arch.fault.disr_el1)); DEFINE(VCPU_WORKAROUND_FLAGS, offsetof(struct kvm_vcpu, arch.workaround_flags)); - DEFINE(VCPU_HCR_EL2, offsetof(struct kvm_vcpu, arch.hcr_el2)); DEFINE(CPU_USER_PT_REGS, offsetof(struct kvm_cpu_context, regs)); DEFINE(CPU_APIAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1])); DEFINE(CPU_APIBKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1])); diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index f7af9688c1f7..9296d7108f93 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -217,7 +217,7 @@
int __kvm_vcpu_run(struct kvm_vcpu *vcpu) __activate_traps(vcpu); __hyp_vgic_restore_state(vcpu); - __timer_enable_traps(vcpu); + __timer_enable_traps(); __debug_switch_to_guest(vcpu); @@ -230,7 +230,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) __sysreg_save_state_nvhe(guest_ctxt); __sysreg32_save_state(vcpu); - __timer_disable_traps(vcpu); + __timer_disable_traps(); __hyp_vgic_save_state(vcpu); __deactivate_traps(vcpu); @@ -272,7 +272,7 @@ void __noreturn hyp_panic(void) vcpu = host_ctxt->__hyp_running_vcpu; if (vcpu) { - __timer_disable_traps(vcpu); + __timer_disable_traps(); __deactivate_traps(vcpu); __load_host_stage2(); __sysreg_restore_state_nvhe(host_ctxt); diff --git a/arch/arm64/kvm/hyp/nvhe/timer-sr.c b/arch/arm64/kvm/hyp/nvhe/timer-sr.c index 9072e71693ba..7b2a23ccdb0a 100644 --- a/arch/arm64/kvm/hyp/nvhe/timer-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/timer-sr.c @@ -19,7 +19,7 @@ void __kvm_timer_set_cntvoff(u64 cntvoff) * Should only be called on non-VHE systems. * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). */ -void __timer_disable_traps(struct kvm_vcpu *vcpu) +void __timer_disable_traps(void) { u64 val; @@ -33,7 +33,7 @@ void __timer_disable_traps(struct kvm_vcpu *vcpu) * Should only be called on non-VHE systems. * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). 
*/ -void __timer_enable_traps(struct kvm_vcpu *vcpu) +void __timer_enable_traps(void) { u64 val; From patchwork Fri Sep 24 12:53:34 2021 X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 12515275
Date: Fri, 24 Sep 2021 13:53:34 +0100 In-Reply-To: <20210924125359.2587041-1-tabba@google.com> Message-Id: <20210924125359.2587041-6-tabba@google.com> References: <20210924125359.2587041-1-tabba@google.com> Subject: [RFC PATCH v1 05/30] KVM: arm64: add accessors for kvm_cpu_context From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Add accessors to get/set elements of struct kvm_cpu_context. This simplifies future refactoring and makes the code more consistent.
Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_emulate.h | 53 ++++++++++++++++++++++------ arch/arm64/include/asm/kvm_host.h | 18 +++++++++- arch/arm64/kvm/hyp/exception.c | 43 +++++++++++++++++----- 3 files changed, 94 insertions(+), 20 deletions(-) diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index 01b9857757f2..ad6e53cef1a4 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -127,19 +127,34 @@ static inline void vcpu_set_vsesr(struct kvm_vcpu *vcpu, u64 vsesr) vcpu->arch.vsesr_el2 = vsesr; } +static __always_inline unsigned long *ctxt_pc(const struct kvm_cpu_context *ctxt) +{ + return (unsigned long *)&ctxt_gp_regs(ctxt)->pc; +} + static __always_inline unsigned long *vcpu_pc(const struct kvm_vcpu *vcpu) { - return (unsigned long *)&vcpu_gp_regs(vcpu)->pc; + return ctxt_pc(&vcpu_ctxt(vcpu)); +} + +static __always_inline unsigned long *ctxt_cpsr(const struct kvm_cpu_context *ctxt) +{ + return (unsigned long *)&ctxt_gp_regs(ctxt)->pstate; } static __always_inline unsigned long *vcpu_cpsr(const struct kvm_vcpu *vcpu) { - return (unsigned long *)&vcpu_gp_regs(vcpu)->pstate; + return ctxt_cpsr(&vcpu_ctxt(vcpu)); +} + +static __always_inline bool ctxt_mode_is_32bit(const struct kvm_cpu_context *ctxt) +{ + return !!(*ctxt_cpsr(ctxt) & PSR_MODE32_BIT); } static __always_inline bool vcpu_mode_is_32bit(const struct kvm_vcpu *vcpu) { - return !!(*vcpu_cpsr(vcpu) & PSR_MODE32_BIT); + return ctxt_mode_is_32bit(&vcpu_ctxt(vcpu)); } static __always_inline bool kvm_condition_valid(const struct kvm_vcpu *vcpu) @@ -150,27 +165,45 @@ static __always_inline bool kvm_condition_valid(const struct kvm_vcpu *vcpu) return true; } +static inline void ctxt_set_thumb(struct kvm_cpu_context *ctxt) +{ + *ctxt_cpsr(ctxt) |= PSR_AA32_T_BIT; +} + static inline void vcpu_set_thumb(struct kvm_vcpu *vcpu) { - *vcpu_cpsr(vcpu) |= PSR_AA32_T_BIT; + ctxt_set_thumb(&vcpu_ctxt(vcpu)); } /* - * vcpu_get_reg 
and vcpu_set_reg should always be passed a register number - * coming from a read of ESR_EL2. Otherwise, it may give the wrong result on - * AArch32 with banked registers. + * vcpu/ctxt_get_reg and vcpu/ctxt_set_reg should always be passed a register + * number coming from a read of ESR_EL2. Otherwise, it may give the wrong result + * on AArch32 with banked registers. */ +static __always_inline unsigned long +ctxt_get_reg(const struct kvm_cpu_context *ctxt, u8 reg_num) +{ + return (reg_num == 31) ? 0 : ctxt_gp_regs(ctxt)->regs[reg_num]; +} + +static __always_inline void +ctxt_set_reg(struct kvm_cpu_context *ctxt, u8 reg_num, unsigned long val) +{ + if (reg_num != 31) + ctxt_gp_regs(ctxt)->regs[reg_num] = val; +} + static __always_inline unsigned long vcpu_get_reg(const struct kvm_vcpu *vcpu, u8 reg_num) { - return (reg_num == 31) ? 0 : vcpu_gp_regs(vcpu)->regs[reg_num]; + return ctxt_get_reg(&vcpu_ctxt(vcpu), reg_num); + } static __always_inline void vcpu_set_reg(struct kvm_vcpu *vcpu, u8 reg_num, unsigned long val) { - if (reg_num != 31) - vcpu_gp_regs(vcpu)->regs[reg_num] = val; + ctxt_set_reg(&vcpu_ctxt(vcpu), reg_num, val); } /* diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index adb21a7f0891..097e5f533af9 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -446,7 +446,23 @@ struct kvm_vcpu_arch { #define vcpu_has_ptrauth(vcpu) false #endif -#define vcpu_gp_regs(v) (&(v)->arch.ctxt.regs) +#define vcpu_ctxt(vcpu) ((vcpu)->arch.ctxt) + +/* VCPU Context accessors (direct) */ +#define ctxt_gp_regs(c) (&(c)->regs) +#define ctxt_spsr_abt(c) (&(c)->spsr_abt) +#define ctxt_spsr_und(c) (&(c)->spsr_und) +#define ctxt_spsr_irq(c) (&(c)->spsr_irq) +#define ctxt_spsr_fiq(c) (&(c)->spsr_fiq) +#define ctxt_fp_regs(c) (&(c)->fp_regs) + +/* VCPU Context accessors */ +#define vcpu_gp_regs(v) ctxt_gp_regs(&vcpu_ctxt(v)) +#define vcpu_spsr_abt(v) ctxt_spsr_abt(&vcpu_ctxt(v)) +#define vcpu_spsr_und(v) 
ctxt_spsr_und(&vcpu_ctxt(v)) +#define vcpu_spsr_irq(v) ctxt_spsr_irq(&vcpu_ctxt(v)) +#define vcpu_spsr_fiq(v) ctxt_spsr_fiq(&vcpu_ctxt(v)) +#define vcpu_fp_regs(v) ctxt_fp_regs(&vcpu_ctxt(v)) /* * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c index 11541b94b328..643c5844f684 100644 --- a/arch/arm64/kvm/hyp/exception.c +++ b/arch/arm64/kvm/hyp/exception.c @@ -18,43 +18,68 @@ #error Hypervisor code only! #endif -static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg) +static inline u64 __ctxt_read_sys_reg(const struct kvm_cpu_context *vcpu_ctxt, int reg) { u64 val; if (__vcpu_read_sys_reg_from_cpu(reg, &val)) return val; - return __vcpu_sys_reg(vcpu, reg); + return ctxt_sys_reg(vcpu_ctxt, reg); } -static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg) +static inline void __ctxt_write_sys_reg(struct kvm_cpu_context *vcpu_ctxt, u64 val, int reg) { if (__vcpu_write_sys_reg_to_cpu(val, reg)) return; - __vcpu_sys_reg(vcpu, reg) = val; + ctxt_sys_reg(vcpu_ctxt, reg) = val; } -static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val) +static void __ctxt_write_spsr(struct kvm_cpu_context *vcpu_ctxt, u64 val) { write_sysreg_el1(val, SYS_SPSR); } -static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val) +static void __ctxt_write_spsr_abt(struct kvm_cpu_context *vcpu_ctxt, u64 val) { if (has_vhe()) write_sysreg(val, spsr_abt); else - vcpu->arch.ctxt.spsr_abt = val; + *ctxt_spsr_abt(vcpu_ctxt) = val; } -static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val) +static void __ctxt_write_spsr_und(struct kvm_cpu_context *vcpu_ctxt, u64 val) { if (has_vhe()) write_sysreg(val, spsr_und); else - vcpu->arch.ctxt.spsr_und = val; + *ctxt_spsr_und(vcpu_ctxt) = val; +} + +static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg) +{ + return __ctxt_read_sys_reg(&vcpu_ctxt(vcpu), reg); +} + +static inline void 
__vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg) +{ + __ctxt_write_sys_reg(&vcpu_ctxt(vcpu), val, reg); +} + +static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val) +{ + __ctxt_write_spsr(&vcpu_ctxt(vcpu), val); +} + +static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val) +{ + __ctxt_write_spsr_abt(&vcpu_ctxt(vcpu), val); +} + +static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val) +{ + __ctxt_write_spsr_und(&vcpu_ctxt(vcpu), val); } /* From patchwork Fri Sep 24 12:53:35 2021 X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 12515271
Date: Fri, 24 Sep 2021 13:53:35 +0100 In-Reply-To: <20210924125359.2587041-1-tabba@google.com> Message-Id: <20210924125359.2587041-7-tabba@google.com> References: <20210924125359.2587041-1-tabba@google.com> Subject: [RFC PATCH v1 06/30] KVM: arm64: COCCI: use_ctxt_access.cocci: use kvm_cpu_context accessors From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Some parts of the code access
vcpu->arch.ctxt directly instead of using existing accessors.
Refactor to use the existing accessors to make the code more consistent
and to simplify future patches.

This applies the semantic patch with the following command:
spatch --sp-file cocci_refactor/use_ctxt_access.cocci --dir arch/arm64/kvm/ --include-headers --in-place

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/fpsimd.c                    |  2 +-
 arch/arm64/kvm/guest.c                     | 28 +++++++++++-----------
 arch/arm64/kvm/hyp/include/hyp/switch.h    |  4 ++--
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 16 ++++++-------
 arch/arm64/kvm/reset.c                     | 10 ++++----
 5 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 5621020b28de..db135588236a 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -97,7 +97,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
 	WARN_ON_ONCE(!irqs_disabled());
 
 	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
-		fpsimd_bind_state_to_cpu(&vcpu->arch.ctxt.fp_regs,
+		fpsimd_bind_state_to_cpu(vcpu_fp_regs(vcpu),
					 vcpu->arch.sve_state,
					 vcpu->arch.sve_max_vl);
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 5cb4a1cd5603..c4429307a164 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -116,49 +116,49 @@ static void *core_reg_addr(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	     KVM_REG_ARM_CORE_REG(regs.regs[30]):
 		off -= KVM_REG_ARM_CORE_REG(regs.regs[0]);
 		off /= 2;
-		return &vcpu->arch.ctxt.regs.regs[off];
+		return &vcpu_gp_regs(vcpu)->regs[off];
 
 	case KVM_REG_ARM_CORE_REG(regs.sp):
-		return &vcpu->arch.ctxt.regs.sp;
+		return &vcpu_gp_regs(vcpu)->sp;
 
 	case KVM_REG_ARM_CORE_REG(regs.pc):
-		return &vcpu->arch.ctxt.regs.pc;
+		return &vcpu_gp_regs(vcpu)->pc;
 
 	case KVM_REG_ARM_CORE_REG(regs.pstate):
-		return &vcpu->arch.ctxt.regs.pstate;
+		return &vcpu_gp_regs(vcpu)->pstate;
 
 	case KVM_REG_ARM_CORE_REG(sp_el1):
-		return __ctxt_sys_reg(&vcpu->arch.ctxt, SP_EL1);
+		return &__vcpu_sys_reg(vcpu, SP_EL1);
 
 	case KVM_REG_ARM_CORE_REG(elr_el1):
-		return __ctxt_sys_reg(&vcpu->arch.ctxt, ELR_EL1);
+		return &__vcpu_sys_reg(vcpu, ELR_EL1);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_EL1]):
-		return __ctxt_sys_reg(&vcpu->arch.ctxt, SPSR_EL1);
+		return &__vcpu_sys_reg(vcpu, SPSR_EL1);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_ABT]):
-		return &vcpu->arch.ctxt.spsr_abt;
+		return vcpu_spsr_abt(vcpu);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_UND]):
-		return &vcpu->arch.ctxt.spsr_und;
+		return vcpu_spsr_und(vcpu);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_IRQ]):
-		return &vcpu->arch.ctxt.spsr_irq;
+		return vcpu_spsr_irq(vcpu);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_FIQ]):
-		return &vcpu->arch.ctxt.spsr_fiq;
+		return vcpu_spsr_fiq(vcpu);
 
 	case KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]) ...
 	     KVM_REG_ARM_CORE_REG(fp_regs.vregs[31]):
 		off -= KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]);
 		off /= 4;
-		return &vcpu->arch.ctxt.fp_regs.vregs[off];
+		return &vcpu_fp_regs(vcpu)->vregs[off];
 
 	case KVM_REG_ARM_CORE_REG(fp_regs.fpsr):
-		return &vcpu->arch.ctxt.fp_regs.fpsr;
+		return &vcpu_fp_regs(vcpu)->fpsr;
 
 	case KVM_REG_ARM_CORE_REG(fp_regs.fpcr):
-		return &vcpu->arch.ctxt.fp_regs.fpcr;
+		return &vcpu_fp_regs(vcpu)->fpcr;
 
 	default:
 		return NULL;
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index e4a2f295a394..9fa9cf71eefa 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -217,7 +217,7 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 {
 	sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
 	__sve_restore_state(vcpu_sve_pffr(vcpu),
-			    &vcpu->arch.ctxt.fp_regs.fpsr);
+			    &vcpu_fp_regs(vcpu)->fpsr);
 	write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
 }
 
@@ -276,7 +276,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 	if (sve_guest)
 		__hyp_sve_restore_guest(vcpu);
 	else
-		__fpsimd_restore_state(&vcpu->arch.ctxt.fp_regs);
+		__fpsimd_restore_state(vcpu_fp_regs(vcpu));
 
 	/* Skip restoring fpexc32 for AArch64 guests */
 	if (!(read_sysreg(hcr_el2) & HCR_RW))
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index cce43bfe158f..9451206f512e 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -161,10 +161,10 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
-	vcpu->arch.ctxt.spsr_abt = read_sysreg(spsr_abt);
-	vcpu->arch.ctxt.spsr_und = read_sysreg(spsr_und);
-	vcpu->arch.ctxt.spsr_irq = read_sysreg(spsr_irq);
-	vcpu->arch.ctxt.spsr_fiq = read_sysreg(spsr_fiq);
+	*vcpu_spsr_abt(vcpu) = read_sysreg(spsr_abt);
+	*vcpu_spsr_und(vcpu) = read_sysreg(spsr_und);
+	*vcpu_spsr_irq(vcpu) = read_sysreg(spsr_irq);
+	*vcpu_spsr_fiq(vcpu) = read_sysreg(spsr_fiq);
 
 	__vcpu_sys_reg(vcpu, DACR32_EL2) = read_sysreg(dacr32_el2);
 	__vcpu_sys_reg(vcpu, IFSR32_EL2) = read_sysreg(ifsr32_el2);
@@ -178,10 +178,10 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
-	write_sysreg(vcpu->arch.ctxt.spsr_abt, spsr_abt);
-	write_sysreg(vcpu->arch.ctxt.spsr_und, spsr_und);
-	write_sysreg(vcpu->arch.ctxt.spsr_irq, spsr_irq);
-	write_sysreg(vcpu->arch.ctxt.spsr_fiq, spsr_fiq);
+	write_sysreg(*vcpu_spsr_abt(vcpu), spsr_abt);
+	write_sysreg(*vcpu_spsr_und(vcpu), spsr_und);
+	write_sysreg(*vcpu_spsr_irq(vcpu), spsr_irq);
+	write_sysreg(*vcpu_spsr_fiq(vcpu), spsr_fiq);
 
 	write_sysreg(__vcpu_sys_reg(vcpu, DACR32_EL2), dacr32_el2);
 	write_sysreg(__vcpu_sys_reg(vcpu, IFSR32_EL2), ifsr32_el2);
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index d37ebee085cf..ab1ef5313a3e 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -258,11 +258,11 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 
 	/* Reset core registers */
 	memset(vcpu_gp_regs(vcpu), 0, sizeof(*vcpu_gp_regs(vcpu)));
-	memset(&vcpu->arch.ctxt.fp_regs, 0, sizeof(vcpu->arch.ctxt.fp_regs));
-	vcpu->arch.ctxt.spsr_abt = 0;
-	vcpu->arch.ctxt.spsr_und = 0;
-	vcpu->arch.ctxt.spsr_irq = 0;
-	vcpu->arch.ctxt.spsr_fiq = 0;
+	memset(vcpu_fp_regs(vcpu), 0, sizeof(*vcpu_fp_regs(vcpu)));
+	*vcpu_spsr_abt(vcpu) = 0;
+	*vcpu_spsr_und(vcpu) = 0;
+	*vcpu_spsr_irq(vcpu) = 0;
+	*vcpu_spsr_fiq(vcpu) = 0;
 	vcpu_gp_regs(vcpu)->pstate = pstate;
 
 	/* Reset system registers */

From patchwork Fri Sep 24 12:53:36 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515273
Date: Fri, 24 Sep 2021 13:53:36 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-8-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 07/30] KVM: arm64: COCCI: add_ctxt.cocci use_ctxt.cocci: reduce scope of functions to kvm_cpu_ctxt
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

Many functions don't need access to the vcpu structure, but only the kvm_cpu_ctxt.
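As an aside on what this narrowing buys, here is a hedged, self-contained C sketch; the struct layout and accessor names below are simplified stand-ins for the real definitions in asm/kvm_host.h and asm/kvm_emulate.h, not the kernel's actual code:

```c
#include <assert.h>

/* Simplified stand-ins for the real kernel structures. */
struct kvm_cpu_context {
	unsigned long regs[31];
	unsigned long pc;
};

struct kvm_vcpu {
	struct {
		struct kvm_cpu_context ctxt;
	} arch;
};

/* Function-like macros only expand when followed by '(', so a local
 * variable may safely be named vcpu_ctxt as well. */
#define vcpu_ctxt(vcpu)	((vcpu)->arch.ctxt)
#define ctxt_pc(ctxt)	(&(ctxt)->pc)

/* Before: the helper takes the whole vcpu even though it only touches
 * the register context. */
static void skip_instr_vcpu(struct kvm_vcpu *vcpu)
{
	*ctxt_pc(&vcpu_ctxt(vcpu)) += 4;
}

/* After: scope reduced to the context, so the same helper could also be
 * applied to a bare kvm_cpu_context (e.g. a host context). */
static void skip_instr_ctxt(struct kvm_cpu_context *vcpu_ctxt)
{
	*ctxt_pc(vcpu_ctxt) += 4;
}

int demo(void)
{
	struct kvm_vcpu vcpu = { 0 };

	skip_instr_vcpu(&vcpu);
	skip_instr_ctxt(&vcpu_ctxt(&vcpu));
	return (int)vcpu.arch.ctxt.pc;	/* two 4-byte skips */
}
```

Both call styles advance the same pc field; the narrowed variant just no longer needs the enclosing vcpu.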
Reduce their scope.

This applies the semantic patches with the following commands:
spatch --sp-file cocci_refactor/add_ctxt.cocci --dir arch/arm64/kvm/hyp --ignore arch/arm64/kvm/hyp/nvhe/debug-sr.c --ignore arch/arm64/kvm/hyp/vhe/debug-sr.c --include-headers --in-place
spatch --sp-file cocci_refactor/use_ctxt.cocci --dir arch/arm64/kvm/hyp --include-headers --in-place
spatch --sp-file cocci_refactor/use_ctxt.cocci --dir arch/arm64/kvm/hyp --include-headers --in-place

This patch adds variables that may be unused. These will be removed at
the end of this patch series.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/aarch32.c               | 18 +++---
 arch/arm64/kvm/hyp/exception.c             | 60 ++++++++++--------
 arch/arm64/kvm/hyp/include/hyp/adjust_pc.h | 18 +++---
 arch/arm64/kvm/hyp/include/hyp/switch.h    | 20 ++++--
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 31 +++++----
 arch/arm64/kvm/hyp/nvhe/switch.c           |  5 ++
 arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c   | 13 ++--
 arch/arm64/kvm/hyp/vgic-v3-sr.c            | 71 +++++++++++++++-------
 arch/arm64/kvm/hyp/vhe/switch.c            |  7 +++
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c         |  2 +
 10 files changed, 155 insertions(+), 90 deletions(-)

diff --git a/arch/arm64/kvm/hyp/aarch32.c b/arch/arm64/kvm/hyp/aarch32.c
index f98cbe2626a1..27ebfff023ff 100644
--- a/arch/arm64/kvm/hyp/aarch32.c
+++ b/arch/arm64/kvm/hyp/aarch32.c
@@ -46,6 +46,7 @@ static const unsigned short cc_map[16] = {
  */
 bool kvm_condition_valid32(const struct kvm_vcpu *vcpu)
 {
+	const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	unsigned long cpsr;
 	u32 cpsr_cond;
 	int cond;
@@ -59,7 +60,7 @@ bool kvm_condition_valid32(const struct kvm_vcpu *vcpu)
 	if (cond == 0xE)
 		return true;
 
-	cpsr = *vcpu_cpsr(vcpu);
+	cpsr = *ctxt_cpsr(vcpu_ctxt);
 
 	if (cond < 0) {
 		/* This can happen in Thumb mode: examine IT state. */
@@ -93,10 +94,10 @@ bool kvm_condition_valid32(const struct kvm_vcpu *vcpu)
  *
  * IT[7:0] -> CPSR[26:25],CPSR[15:10]
  */
-static void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
+static void kvm_adjust_itstate(struct kvm_cpu_context *vcpu_ctxt)
 {
 	unsigned long itbits, cond;
-	unsigned long cpsr = *vcpu_cpsr(vcpu);
+	unsigned long cpsr = *ctxt_cpsr(vcpu_ctxt);
 	bool is_arm = !(cpsr & PSR_AA32_T_BIT);
 
 	if (is_arm || !(cpsr & PSR_AA32_IT_MASK))
@@ -116,7 +117,7 @@ static void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
 	cpsr |= cond << 13;
 	cpsr |= (itbits & 0x1c) << (10 - 2);
 	cpsr |= (itbits & 0x3) << 25;
-	*vcpu_cpsr(vcpu) = cpsr;
+	*ctxt_cpsr(vcpu_ctxt) = cpsr;
 }
 
 /**
@@ -125,16 +126,17 @@ static void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
  */
 void kvm_skip_instr32(struct kvm_vcpu *vcpu)
 {
-	u32 pc = *vcpu_pc(vcpu);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u32 pc = *ctxt_pc(vcpu_ctxt);
 	bool is_thumb;
 
-	is_thumb = !!(*vcpu_cpsr(vcpu) & PSR_AA32_T_BIT);
+	is_thumb = !!(*ctxt_cpsr(vcpu_ctxt) & PSR_AA32_T_BIT);
 	if (is_thumb && !kvm_vcpu_trap_il_is32bit(vcpu))
 		pc += 2;
 	else
 		pc += 4;
 
-	*vcpu_pc(vcpu) = pc;
+	*ctxt_pc(vcpu_ctxt) = pc;
 
-	kvm_adjust_itstate(vcpu);
+	kvm_adjust_itstate(vcpu_ctxt);
 }
diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index 643c5844f684..e23b9cedb043 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -99,13 +99,14 @@ static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
  * Here we manipulate the fields in order of the AArch64 SPSR_ELx layout, from
  * MSB to LSB.
  */
-static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
+static void enter_exception64(struct kvm_cpu_context *vcpu_ctxt,
+			      unsigned long target_mode,
 			      enum exception_type type)
 {
 	unsigned long sctlr, vbar, old, new, mode;
 	u64 exc_offset;
 
-	mode = *vcpu_cpsr(vcpu) & (PSR_MODE_MASK | PSR_MODE32_BIT);
+	mode = *ctxt_cpsr(vcpu_ctxt) & (PSR_MODE_MASK | PSR_MODE32_BIT);
 
 	if (mode == target_mode)
 		exc_offset = CURRENT_EL_SP_ELx_VECTOR;
@@ -118,18 +119,18 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
 
 	switch (target_mode) {
 	case PSR_MODE_EL1h:
-		vbar = __vcpu_read_sys_reg(vcpu, VBAR_EL1);
-		sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL1);
-		__vcpu_write_sys_reg(vcpu, *vcpu_pc(vcpu), ELR_EL1);
+		vbar = __ctxt_read_sys_reg(vcpu_ctxt, VBAR_EL1);
+		sctlr = __ctxt_read_sys_reg(vcpu_ctxt, SCTLR_EL1);
+		__ctxt_write_sys_reg(vcpu_ctxt, *ctxt_pc(vcpu_ctxt), ELR_EL1);
 		break;
 	default:
 		/* Don't do that */
 		BUG();
 	}
 
-	*vcpu_pc(vcpu) = vbar + exc_offset + type;
+	*ctxt_pc(vcpu_ctxt) = vbar + exc_offset + type;
 
-	old = *vcpu_cpsr(vcpu);
+	old = *ctxt_cpsr(vcpu_ctxt);
 	new = 0;
 
 	new |= (old & PSR_N_BIT);
@@ -172,8 +173,8 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
 
 	new |= target_mode;
 
-	*vcpu_cpsr(vcpu) = new;
-	__vcpu_write_spsr(vcpu, old);
+	*ctxt_cpsr(vcpu_ctxt) = new;
+	__ctxt_write_spsr(vcpu_ctxt, old);
 }
 
 /*
@@ -194,12 +195,13 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
  * Here we manipulate the fields in order of the AArch32 SPSR_ELx layout, from
  * MSB to LSB.
  */
-static unsigned long get_except32_cpsr(struct kvm_vcpu *vcpu, u32 mode)
+static unsigned long get_except32_cpsr(struct kvm_cpu_context *vcpu_ctxt,
+				       u32 mode)
 {
-	u32 sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL1);
+	u32 sctlr = __ctxt_read_sys_reg(vcpu_ctxt, SCTLR_EL1);
 	unsigned long old, new;
 
-	old = *vcpu_cpsr(vcpu);
+	old = *ctxt_cpsr(vcpu_ctxt);
 	new = 0;
 
 	new |= (old & PSR_AA32_N_BIT);
@@ -288,27 +290,28 @@ static const u8 return_offsets[8][2] = {
 	[7] = { 4, 4 },		/* FIQ, unused */
 };
 
-static void enter_exception32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
+static void enter_exception32(struct kvm_cpu_context *vcpu_ctxt, u32 mode,
+			      u32 vect_offset)
 {
-	unsigned long spsr = *vcpu_cpsr(vcpu);
+	unsigned long spsr = *ctxt_cpsr(vcpu_ctxt);
 	bool is_thumb = (spsr & PSR_AA32_T_BIT);
-	u32 sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL1);
+	u32 sctlr = __ctxt_read_sys_reg(vcpu_ctxt, SCTLR_EL1);
 	u32 return_address;
 
-	*vcpu_cpsr(vcpu) = get_except32_cpsr(vcpu, mode);
-	return_address = *vcpu_pc(vcpu);
+	*ctxt_cpsr(vcpu_ctxt) = get_except32_cpsr(vcpu_ctxt, mode);
+	return_address = *ctxt_pc(vcpu_ctxt);
 	return_address += return_offsets[vect_offset >> 2][is_thumb];
 
 	/* KVM only enters the ABT and UND modes, so only deal with those */
 	switch(mode) {
 	case PSR_AA32_MODE_ABT:
-		__vcpu_write_spsr_abt(vcpu, host_spsr_to_spsr32(spsr));
-		vcpu_gp_regs(vcpu)->compat_lr_abt = return_address;
+		__ctxt_write_spsr_abt(vcpu_ctxt, host_spsr_to_spsr32(spsr));
+		ctxt_gp_regs(vcpu_ctxt)->compat_lr_abt = return_address;
 		break;
 
 	case PSR_AA32_MODE_UND:
-		__vcpu_write_spsr_und(vcpu, host_spsr_to_spsr32(spsr));
-		vcpu_gp_regs(vcpu)->compat_lr_und = return_address;
+		__ctxt_write_spsr_und(vcpu_ctxt, host_spsr_to_spsr32(spsr));
+		ctxt_gp_regs(vcpu_ctxt)->compat_lr_und = return_address;
 		break;
 	}
 
@@ -316,23 +319,24 @@ static void enter_exception32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
 	if (sctlr & (1 << 13))
 		vect_offset += 0xffff0000;
 	else /* always have security exceptions */
-		vect_offset += __vcpu_read_sys_reg(vcpu, VBAR_EL1);
+		vect_offset += __ctxt_read_sys_reg(vcpu_ctxt, VBAR_EL1);
 
-	*vcpu_pc(vcpu) = vect_offset;
+	*ctxt_pc(vcpu_ctxt) = vect_offset;
 }
 
 static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (vcpu_el1_is_32bit(vcpu)) {
 		switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) {
 		case KVM_ARM64_EXCEPT_AA32_UND:
-			enter_exception32(vcpu, PSR_AA32_MODE_UND, 4);
+			enter_exception32(vcpu_ctxt, PSR_AA32_MODE_UND, 4);
 			break;
 		case KVM_ARM64_EXCEPT_AA32_IABT:
-			enter_exception32(vcpu, PSR_AA32_MODE_ABT, 12);
+			enter_exception32(vcpu_ctxt, PSR_AA32_MODE_ABT, 12);
 			break;
 		case KVM_ARM64_EXCEPT_AA32_DABT:
-			enter_exception32(vcpu, PSR_AA32_MODE_ABT, 16);
+			enter_exception32(vcpu_ctxt, PSR_AA32_MODE_ABT, 16);
 			break;
 		default:
 			/* Err... */
@@ -342,7 +346,8 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 		switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) {
 		case (KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
 		      KVM_ARM64_EXCEPT_AA64_EL1):
-			enter_exception64(vcpu, PSR_MODE_EL1h, except_type_sync);
+			enter_exception64(vcpu_ctxt, PSR_MODE_EL1h,
+					  except_type_sync);
 			break;
 		default:
 			/*
@@ -361,6 +366,7 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
  */
 void __kvm_adjust_pc(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (vcpu->arch.flags & KVM_ARM64_PENDING_EXCEPTION) {
 		kvm_inject_exception(vcpu);
 		vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION |
diff --git a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h
index 4fdfeabefeb4..20dde9dbc11b 100644
--- a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h
+++ b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h
@@ -15,15 +15,16 @@
 
 static inline void kvm_skip_instr(struct kvm_vcpu *vcpu)
 {
-	if (vcpu_mode_is_32bit(vcpu)) {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	if (ctxt_mode_is_32bit(vcpu_ctxt)) {
 		kvm_skip_instr32(vcpu);
 	} else {
-		*vcpu_pc(vcpu) += 4;
-		*vcpu_cpsr(vcpu) &= ~PSR_BTYPE_MASK;
+		*ctxt_pc(vcpu_ctxt) += 4;
+		*ctxt_cpsr(vcpu_ctxt) &= ~PSR_BTYPE_MASK;
 	}
 
 	/* advance the singlestep state machine */
-	*vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;
+	*ctxt_cpsr(vcpu_ctxt) &= ~DBG_SPSR_SS;
 }
 
 /*
@@ -32,13 +33,14 @@ static inline void kvm_skip_instr(struct kvm_vcpu *vcpu)
  */
 static inline void __kvm_skip_instr(struct kvm_vcpu *vcpu)
 {
-	*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
-	vcpu_gp_regs(vcpu)->pstate = read_sysreg_el2(SYS_SPSR);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	*ctxt_pc(vcpu_ctxt) = read_sysreg_el2(SYS_ELR);
+	ctxt_gp_regs(vcpu_ctxt)->pstate = read_sysreg_el2(SYS_SPSR);
 
 	kvm_skip_instr(vcpu);
 
-	write_sysreg_el2(vcpu_gp_regs(vcpu)->pstate, SYS_SPSR);
-	write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR);
+	write_sysreg_el2(ctxt_gp_regs(vcpu_ctxt)->pstate, SYS_SPSR);
+	write_sysreg_el2(*ctxt_pc(vcpu_ctxt), SYS_ELR);
 }
 
 /*
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 9fa9cf71eefa..41c553a7b5dd 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -54,14 +54,16 @@ static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
 /* Save the 32-bit only FPSIMD system register state */
 static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
-	__vcpu_sys_reg(vcpu, FPEXC32_EL2) = read_sysreg(fpexc32_el2);
+	ctxt_sys_reg(vcpu_ctxt, FPEXC32_EL2) = read_sysreg(fpexc32_el2);
 }
 
 static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	/*
 	 * We are about to set CPTR_EL2.TFP to trap all floating point
 	 * register accesses to EL2, however, the ARM ARM clearly states that
@@ -215,15 +217,17 @@ static inline void __hyp_sve_save_host(struct kvm_vcpu *vcpu)
 
 static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
 	__sve_restore_state(vcpu_sve_pffr(vcpu),
-			    &vcpu_fp_regs(vcpu)->fpsr);
-	write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
+			    &ctxt_fp_regs(vcpu_ctxt)->fpsr);
+	write_sysreg_el1(ctxt_sys_reg(vcpu_ctxt, ZCR_EL1), SYS_ZCR);
 }
 
 /* Check for an FPSIMD/SVE trap and handle as appropriate */
 static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	bool sve_guest, sve_host;
 	u8 esr_ec;
 	u64 reg;
@@ -276,11 +280,12 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 	if (sve_guest)
 		__hyp_sve_restore_guest(vcpu);
 	else
-		__fpsimd_restore_state(vcpu_fp_regs(vcpu));
+		__fpsimd_restore_state(ctxt_fp_regs(vcpu_ctxt));
 
 	/* Skip restoring fpexc32 for AArch64 guests */
 	if (!(read_sysreg(hcr_el2) & HCR_RW))
-		write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2);
+		write_sysreg(ctxt_sys_reg(vcpu_ctxt, FPEXC32_EL2),
+			     fpexc32_el2);
 
 	vcpu->arch.flags |= KVM_ARM64_FP_ENABLED;
 
@@ -289,9 +294,10 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 
 static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
 	int rt = kvm_vcpu_sys_get_rt(vcpu);
-	u64 val = vcpu_get_reg(vcpu, rt);
+	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	/*
 	 * The normal sysreg handling code expects to see the traps,
@@ -382,6 +388,7 @@ DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 
 static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *ctxt;
 	u64 val;
 
@@ -412,6 +419,7 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
  */
 static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
 		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index 9451206f512e..c2668b85b67e 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -158,36 +158,39 @@ static inline void __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt)
 
 static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
-	*vcpu_spsr_abt(vcpu) = read_sysreg(spsr_abt);
-	*vcpu_spsr_und(vcpu) = read_sysreg(spsr_und);
-	*vcpu_spsr_irq(vcpu) = read_sysreg(spsr_irq);
-	*vcpu_spsr_fiq(vcpu) = read_sysreg(spsr_fiq);
+	*ctxt_spsr_abt(vcpu_ctxt) = read_sysreg(spsr_abt);
+	*ctxt_spsr_und(vcpu_ctxt) = read_sysreg(spsr_und);
+	*ctxt_spsr_irq(vcpu_ctxt) = read_sysreg(spsr_irq);
+	*ctxt_spsr_fiq(vcpu_ctxt) = read_sysreg(spsr_fiq);
 
-	__vcpu_sys_reg(vcpu, DACR32_EL2) = read_sysreg(dacr32_el2);
-	__vcpu_sys_reg(vcpu, IFSR32_EL2) = read_sysreg(ifsr32_el2);
+	ctxt_sys_reg(vcpu_ctxt, DACR32_EL2) = read_sysreg(dacr32_el2);
+	ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2) = read_sysreg(ifsr32_el2);
 
 	if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
-		__vcpu_sys_reg(vcpu, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2);
+		ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2);
 }
 
 static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
-	write_sysreg(*vcpu_spsr_abt(vcpu), spsr_abt);
-	write_sysreg(*vcpu_spsr_und(vcpu), spsr_und);
-	write_sysreg(*vcpu_spsr_irq(vcpu), spsr_irq);
-	write_sysreg(*vcpu_spsr_fiq(vcpu), spsr_fiq);
+	write_sysreg(*ctxt_spsr_abt(vcpu_ctxt), spsr_abt);
+	write_sysreg(*ctxt_spsr_und(vcpu_ctxt), spsr_und);
+	write_sysreg(*ctxt_spsr_irq(vcpu_ctxt), spsr_irq);
+	write_sysreg(*ctxt_spsr_fiq(vcpu_ctxt), spsr_fiq);
 
-	write_sysreg(__vcpu_sys_reg(vcpu, DACR32_EL2), dacr32_el2);
-	write_sysreg(__vcpu_sys_reg(vcpu, IFSR32_EL2), ifsr32_el2);
+	write_sysreg(ctxt_sys_reg(vcpu_ctxt, DACR32_EL2), dacr32_el2);
+	write_sysreg(ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2), ifsr32_el2);
 
 	if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
-		write_sysreg(__vcpu_sys_reg(vcpu, DBGVCR32_EL2), dbgvcr32_el2);
+		write_sysreg(ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2),
+			     dbgvcr32_el2);
 }
 
 #endif /* __ARM64_KVM_HYP_SYSREG_SR_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 9296d7108f93..d5780acab6c2 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -36,6 +36,7 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val;
 
 	___activate_traps(vcpu);
@@ -68,6 +69,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 static void __deactivate_traps(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	extern char __kvm_hyp_host_vector[];
 	u64 mdcr_el2, cptr;
 
@@ -168,6 +170,7 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
 /* Switch to the guest for legacy non-VHE systems */
 int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
 	bool pmu_switch_needed;
@@ -267,9 +270,11 @@ void __noreturn hyp_panic(void)
 	u64 par = read_sysreg_par();
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_vcpu *vcpu;
+	struct kvm_cpu_context *vcpu_ctxt;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
 	vcpu = host_ctxt->__hyp_running_vcpu;
+	vcpu_ctxt = &vcpu_ctxt(vcpu);
 
 	if (vcpu) {
 		__timer_disable_traps();
diff --git a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
index 87a54375bd6e..8dbc39026cc5 100644
--- a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
+++ b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
@@ -15,9 +15,9 @@
 #include
 #include
 
-static bool __is_be(struct kvm_vcpu *vcpu)
+static bool __is_be(struct kvm_cpu_context *vcpu_ctxt)
 {
-	if (vcpu_mode_is_32bit(vcpu))
+	if (ctxt_mode_is_32bit(vcpu_ctxt))
 		return !!(read_sysreg_el2(SYS_SPSR) & PSR_AA32_E_BIT);
 
 	return !!(read_sysreg(SCTLR_EL1) & SCTLR_ELx_EE);
@@ -36,6 +36,7 @@ static bool __is_be(struct kvm_vcpu *vcpu)
  */
 int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
 	struct vgic_dist *vgic = &kvm->arch.vgic;
 	phys_addr_t fault_ipa;
@@ -68,19 +69,19 @@ int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
 	addr += fault_ipa - vgic->vgic_cpu_base;
 
 	if (kvm_vcpu_dabt_iswrite(vcpu)) {
-		u32 data = vcpu_get_reg(vcpu, rd);
-		if (__is_be(vcpu)) {
+		u32 data = ctxt_get_reg(vcpu_ctxt, rd);
+		if (__is_be(vcpu_ctxt)) {
 			/* guest pre-swabbed data, undo this for writel() */
 			data = __kvm_swab32(data);
 		}
 		writel_relaxed(data, addr);
 	} else {
 		u32 data = readl_relaxed(addr);
-		if (__is_be(vcpu)) {
+		if (__is_be(vcpu_ctxt)) {
 			/* guest expects swabbed data */
 			data = __kvm_swab32(data);
 		}
-		vcpu_set_reg(vcpu, rd, data);
+		ctxt_set_reg(vcpu_ctxt, rd, data);
 	}
 
 	__kvm_skip_instr(vcpu);
diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
index 39f8f7f9227c..bdb03b8e50ab 100644
--- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
+++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
@@ -473,6 +473,7 @@ static int __vgic_v3_bpr_min(void)
 
 static int __vgic_v3_get_group(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 esr = kvm_vcpu_get_esr(vcpu);
 	u8 crm = (esr & ESR_ELx_SYS64_ISS_CRM_MASK) >> ESR_ELx_SYS64_ISS_CRM_SHIFT;
 
@@ -673,6 +674,7 @@ static int __vgic_v3_clear_highest_active_priority(void)
 
 static void __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 lr_val;
 	u8 lr_prio, pmr;
 	int lr, grp;
@@ -700,11 +702,11 @@ static void __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	lr_val |= ICH_LR_ACTIVE_BIT;
 	__gic_v3_set_lr(lr_val, lr);
 	__vgic_v3_set_active_priority(lr_prio, vmcr, grp);
-	vcpu_set_reg(vcpu, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK);
+	ctxt_set_reg(vcpu_ctxt, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK);
 	return;
 
 spurious:
-	vcpu_set_reg(vcpu, rt, ICC_IAR1_EL1_SPURIOUS);
+	ctxt_set_reg(vcpu_ctxt, rt, ICC_IAR1_EL1_SPURIOUS);
 }
 
 static void __vgic_v3_clear_active_lr(int lr, u64 lr_val)
@@ -731,7 +733,8 @@ static void __vgic_v3_bump_eoicount(void)
 
 static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	u32 vid = vcpu_get_reg(vcpu, rt);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u32 vid = ctxt_get_reg(vcpu_ctxt, rt);
 	u64 lr_val;
 	int lr;
 
@@ -754,7 +757,8 @@ static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	u32 vid = vcpu_get_reg(vcpu, rt);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u32 vid = ctxt_get_reg(vcpu_ctxt, rt);
 	u64 lr_val;
 	u8 lr_prio, act_prio;
 	int lr, grp;
@@ -791,17 +795,20 @@ static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_read_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	vcpu_set_reg(vcpu, rt, !!(vmcr & ICH_VMCR_ENG0_MASK));
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG0_MASK));
 }
 
 static void __vgic_v3_read_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	vcpu_set_reg(vcpu, rt, !!(vmcr & ICH_VMCR_ENG1_MASK));
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG1_MASK));
 }
 
 static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	u64 val = vcpu_get_reg(vcpu, rt);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	if (val & 1)
 		vmcr |= ICH_VMCR_ENG0_MASK;
@@ -813,7 +820,8 @@ static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	u64 val = vcpu_get_reg(vcpu, rt);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	if (val & 1)
 		vmcr |= ICH_VMCR_ENG1_MASK;
@@ -825,17 +833,20 @@ static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_read_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	vcpu_set_reg(vcpu, rt, __vgic_v3_get_bpr0(vmcr));
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr0(vmcr));
 }
 
 static void __vgic_v3_read_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	vcpu_set_reg(vcpu, rt, __vgic_v3_get_bpr1(vmcr));
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr1(vmcr));
 }
 
 static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	u64 val = vcpu_get_reg(vcpu, rt);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 	u8 bpr_min = __vgic_v3_bpr_min() - 1;
 
 	/* Enforce BPR limiting */
@@ -852,7 +863,8 @@ static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
-	u64 val = vcpu_get_reg(vcpu, rt);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 	u8 bpr_min = __vgic_v3_bpr_min();
 
 	if (vmcr & ICH_VMCR_CBPR_MASK)
@@ -872,6 +884,7 @@ static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val;
 
 	if (!__vgic_v3_get_group(vcpu))
@@ -879,12 +892,13 @@ static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
 	else
 		val = __vgic_v3_read_ap1rn(n);
 
-	vcpu_set_reg(vcpu, rt, val);
+	ctxt_set_reg(vcpu_ctxt, rt, val);
 }
 
 static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
 {
-	u32 val = vcpu_get_reg(vcpu, rt);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	u32 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	if (!__vgic_v3_get_group(vcpu))
 		__vgic_v3_write_ap0rn(val, n);
@@ -895,47 +909,56 @@ static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
 
 static void __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_read_apxrn(vcpu, rt, 0);
 }
 
 static void __vgic_v3_read_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_read_apxrn(vcpu, rt, 1);
 }
 
 static void __vgic_v3_read_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_read_apxrn(vcpu, rt, 2);
 }
 
 static void __vgic_v3_read_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_read_apxrn(vcpu, rt, 3);
 }
 
 static void __vgic_v3_write_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_write_apxrn(vcpu, rt, 0);
 }
 
 static void __vgic_v3_write_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_write_apxrn(vcpu, rt, 1);
 }
 
 static void __vgic_v3_write_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_write_apxrn(vcpu, rt, 2);
 }
 
 static void __vgic_v3_write_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	__vgic_v3_write_apxrn(vcpu, rt, 3);
 }
 
 static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 {
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 lr_val;
 	int lr, lr_grp, grp;
 
@@ -950,19 +973,21 @@ static void
__vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) lr_val = ICC_IAR1_EL1_SPURIOUS; spurious: - vcpu_set_reg(vcpu, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK); + ctxt_set_reg(vcpu_ctxt, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK); } static void __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); vmcr &= ICH_VMCR_PMR_MASK; vmcr >>= ICH_VMCR_PMR_SHIFT; - vcpu_set_reg(vcpu, rt, vmcr); + ctxt_set_reg(vcpu_ctxt, rt, vmcr); } static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { - u32 val = vcpu_get_reg(vcpu, rt); + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + u32 val = ctxt_get_reg(vcpu_ctxt, rt); val <<= ICH_VMCR_PMR_SHIFT; val &= ICH_VMCR_PMR_MASK; @@ -974,12 +999,14 @@ static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val = __vgic_v3_get_highest_active_priority(); - vcpu_set_reg(vcpu, rt, val); + ctxt_set_reg(vcpu_ctxt, rt, val); } static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 vtr, val; vtr = read_gicreg(ICH_VTR_EL2); @@ -996,12 +1023,13 @@ static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) /* CBPR */ val |= (vmcr & ICH_VMCR_CBPR_MASK) >> ICH_VMCR_CBPR_SHIFT; - vcpu_set_reg(vcpu, rt, val); + ctxt_set_reg(vcpu_ctxt, rt, val); } static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { - u32 val = vcpu_get_reg(vcpu, rt); + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + u32 val = ctxt_get_reg(vcpu_ctxt, rt); if (val & ICC_CTLR_EL1_CBPR_MASK) vmcr |= ICH_VMCR_CBPR_MASK; @@ -1018,6 +1046,7 @@ static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); int rt; u32 
esr; u32 vmcr; @@ -1026,7 +1055,7 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) u32 sysreg; esr = kvm_vcpu_get_esr(vcpu); - if (vcpu_mode_is_32bit(vcpu)) { + if (ctxt_mode_is_32bit(vcpu_ctxt)) { if (!kvm_condition_valid(vcpu)) { __kvm_skip_instr(vcpu); return 1; diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index b3229924d243..c2e443202f8e 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -33,6 +33,7 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector); static void __activate_traps(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val; ___activate_traps(vcpu); @@ -68,6 +69,7 @@ NOKPROBE_SYMBOL(__activate_traps); static void __deactivate_traps(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); extern char vectors[]; /* kernel exception vectors */ ___deactivate_traps(vcpu); @@ -88,6 +90,7 @@ NOKPROBE_SYMBOL(__deactivate_traps); void activate_traps_vhe_load(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __activate_traps_common(vcpu); } @@ -107,6 +110,7 @@ void deactivate_traps_vhe_put(void) /* Switch to the guest for VHE systems running in EL2 */ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; u64 exit_code; @@ -160,6 +164,7 @@ NOKPROBE_SYMBOL(__kvm_vcpu_run_vhe); int __kvm_vcpu_run(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); int ret; local_daif_mask(); @@ -197,9 +202,11 @@ static void __hyp_call_panic(u64 spsr, u64 elr, u64 par) { struct kvm_cpu_context *host_ctxt; struct kvm_vcpu *vcpu; + struct kvm_cpu_context *vcpu_ctxt; host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; vcpu = host_ctxt->__hyp_running_vcpu; + vcpu_ctxt = &vcpu_ctxt(vcpu); __deactivate_traps(vcpu); sysreg_restore_host_state_vhe(host_ctxt); diff --git 
a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c index 2a0b8c88d74f..37f56b4743d0 100644 --- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c @@ -63,6 +63,7 @@ NOKPROBE_SYMBOL(sysreg_restore_guest_state_vhe); */ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt; struct kvm_cpu_context *host_ctxt; @@ -97,6 +98,7 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu) */ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt; struct kvm_cpu_context *host_ctxt;

From patchwork Fri Sep 24 12:53:37 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515277
Date: Fri, 24 Sep 2021 13:53:37 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-9-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 08/30] KVM: arm64: add hypervisor state accessors
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com
Part of the state in vcpu_arch is hypervisor-specific. To isolate that state in future patches, start by creating accessors for this state rather than dereferencing vcpu directly.

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_host.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 097e5f533af9..280ee23dfc5a 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -373,6 +373,13 @@ struct kvm_vcpu_arch { } steal; }; +/* Accessors for vcpu parameters related to the hypervisor state. */ +#define vcpu_hcr_el2(vcpu) (vcpu)->arch.hcr_el2 +#define vcpu_mdcr_el2(vcpu) (vcpu)->arch.mdcr_el2 +#define vcpu_vsesr_el2(vcpu) (vcpu)->arch.vsesr_el2 +#define vcpu_fault(vcpu) (vcpu)->arch.fault +#define vcpu_flags(vcpu) (vcpu)->arch.flags + /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ sve_ffr_offset((vcpu)->arch.sve_max_vl))

From patchwork Fri Sep 24 12:53:38 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515297
Date: Fri, 24 Sep 2021 13:53:38 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-10-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 09/30] KVM: arm64: COCCI: vcpu_hyp_accessors.cocci: use accessors for hypervisor state vcpu variables
From: Fuad Tabba
To:
kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

To simplify future refactoring, ensure that all accesses to the hypervisor-state fields in vcpu go through the accessors created earlier in this patch series, rather than dereferencing the vcpu directly.

The semantic patch is applied with the following command:

spatch --sp-file cocci_refactor/vcpu_hyp_accessors.cocci --dir arch/arm64 --include-headers --in-place

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_emulate.h       | 52 +++++++++++-----------
 arch/arm64/kvm/arm.c                       |  2 +-
 arch/arm64/kvm/debug.c                     | 28 ++++++------
 arch/arm64/kvm/fpsimd.c                    | 20 ++++-----
 arch/arm64/kvm/guest.c                     |  2 +-
 arch/arm64/kvm/handle_exit.c               |  2 +-
 arch/arm64/kvm/hyp/exception.c             | 12 ++---
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h  |  6 +--
 arch/arm64/kvm/hyp/include/hyp/switch.h    | 32 ++++++-------
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |  4 +-
 arch/arm64/kvm/hyp/nvhe/debug-sr.c         |  8 ++--
 arch/arm64/kvm/hyp/nvhe/switch.c           |  4 +-
 arch/arm64/kvm/hyp/vhe/switch.c            |  2 +-
 arch/arm64/kvm/inject_fault.c              | 10 ++---
 arch/arm64/kvm/reset.c                     |  6 +--
 arch/arm64/kvm/sys_regs.c                  |  4 +-
 16 files changed, 97 insertions(+), 97 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index
ad6e53cef1a4..7d09a9356d89 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -43,23 +43,23 @@ void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr); static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu) { - return !(vcpu->arch.hcr_el2 & HCR_RW); + return !(vcpu_hcr_el2(vcpu) & HCR_RW); } static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu) { - vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS; + vcpu_hcr_el2(vcpu) = HCR_GUEST_FLAGS; if (is_kernel_in_hyp_mode()) - vcpu->arch.hcr_el2 |= HCR_E2H; + vcpu_hcr_el2(vcpu) |= HCR_E2H; if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN)) { /* route synchronous external abort exceptions to EL2 */ - vcpu->arch.hcr_el2 |= HCR_TEA; + vcpu_hcr_el2(vcpu) |= HCR_TEA; /* trap error record accesses */ - vcpu->arch.hcr_el2 |= HCR_TERR; + vcpu_hcr_el2(vcpu) |= HCR_TERR; } if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) { - vcpu->arch.hcr_el2 |= HCR_FWB; + vcpu_hcr_el2(vcpu) |= HCR_FWB; } else { /* * For non-FWB CPUs, we trap VM ops (HCR_EL2.TVM) until M+C @@ -67,11 +67,11 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu) * MMU gets turned on and do the necessary cache maintenance * then. */ - vcpu->arch.hcr_el2 |= HCR_TVM; + vcpu_hcr_el2(vcpu) |= HCR_TVM; } if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) - vcpu->arch.hcr_el2 &= ~HCR_RW; + vcpu_hcr_el2(vcpu) &= ~HCR_RW; /* * TID3: trap feature register accesses that we virtualise. @@ -79,52 +79,52 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu) * are currently virtualised. 
*/ if (!vcpu_el1_is_32bit(vcpu)) - vcpu->arch.hcr_el2 |= HCR_TID3; + vcpu_hcr_el2(vcpu) |= HCR_TID3; if (cpus_have_const_cap(ARM64_MISMATCHED_CACHE_TYPE) || vcpu_el1_is_32bit(vcpu)) - vcpu->arch.hcr_el2 |= HCR_TID2; + vcpu_hcr_el2(vcpu) |= HCR_TID2; } static inline unsigned long *vcpu_hcr(struct kvm_vcpu *vcpu) { - return (unsigned long *)&vcpu->arch.hcr_el2; + return (unsigned long *)&vcpu_hcr_el2(vcpu); } static inline void vcpu_clear_wfx_traps(struct kvm_vcpu *vcpu) { - vcpu->arch.hcr_el2 &= ~HCR_TWE; + vcpu_hcr_el2(vcpu) &= ~HCR_TWE; if (atomic_read(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vlpi_count) || vcpu->kvm->arch.vgic.nassgireq) - vcpu->arch.hcr_el2 &= ~HCR_TWI; - else - vcpu->arch.hcr_el2 |= HCR_TWI; + vcpu_hcr_el2(vcpu) &= ~HCR_TWI; + else + vcpu_hcr_el2(vcpu) |= HCR_TWI; } static inline void vcpu_set_wfx_traps(struct kvm_vcpu *vcpu) { - vcpu->arch.hcr_el2 |= HCR_TWE; - vcpu->arch.hcr_el2 |= HCR_TWI; + vcpu_hcr_el2(vcpu) |= HCR_TWE; + vcpu_hcr_el2(vcpu) |= HCR_TWI; } static inline void vcpu_ptrauth_enable(struct kvm_vcpu *vcpu) { - vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK); + vcpu_hcr_el2(vcpu) |= (HCR_API | HCR_APK); } static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu) { - vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK); + vcpu_hcr_el2(vcpu) &= ~(HCR_API | HCR_APK); } static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu) { - return vcpu->arch.vsesr_el2; + return vcpu_vsesr_el2(vcpu); } static inline void vcpu_set_vsesr(struct kvm_vcpu *vcpu, u64 vsesr) { - vcpu->arch.vsesr_el2 = vsesr; + vcpu_vsesr_el2(vcpu) = vsesr; } static __always_inline unsigned long *ctxt_pc(const struct kvm_cpu_context *ctxt) @@ -254,7 +254,7 @@ static inline bool vcpu_mode_priv(const struct kvm_vcpu *vcpu) static __always_inline u32 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu) { - return vcpu->arch.fault.esr_el2; + return vcpu_fault(vcpu).esr_el2; } static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu) @@ -269,17 +269,17 @@ static 
__always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu) static __always_inline unsigned long kvm_vcpu_get_hfar(const struct kvm_vcpu *vcpu) { - return vcpu->arch.fault.far_el2; + return vcpu_fault(vcpu).far_el2; } static __always_inline phys_addr_t kvm_vcpu_get_fault_ipa(const struct kvm_vcpu *vcpu) { - return ((phys_addr_t)vcpu->arch.fault.hpfar_el2 & HPFAR_MASK) << 8; + return ((phys_addr_t) vcpu_fault(vcpu).hpfar_el2 & HPFAR_MASK) << 8; } static inline u64 kvm_vcpu_get_disr(const struct kvm_vcpu *vcpu) { - return vcpu->arch.fault.disr_el1; + return vcpu_fault(vcpu).disr_el1; } static inline u32 kvm_vcpu_hvc_get_imm(const struct kvm_vcpu *vcpu) @@ -493,7 +493,7 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu, static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu) { - vcpu->arch.flags |= KVM_ARM64_INCREMENT_PC; + vcpu_flags(vcpu) |= KVM_ARM64_INCREMENT_PC; } static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature) diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index e720148232a0..5f0e2f9821ec 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -907,7 +907,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) * the vcpu state. Note that this relies on __kvm_adjust_pc() * being preempt-safe on VHE. 
*/ - if (unlikely(vcpu->arch.flags & (KVM_ARM64_PENDING_EXCEPTION | + if (unlikely(vcpu_flags(vcpu) & (KVM_ARM64_PENDING_EXCEPTION | KVM_ARM64_INCREMENT_PC))) kvm_call_hyp(__kvm_adjust_pc, vcpu); diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c index d5e79d7ee6e9..e7a5956fe648 100644 --- a/arch/arm64/kvm/debug.c +++ b/arch/arm64/kvm/debug.c @@ -87,8 +87,8 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu) * This also clears MDCR_EL2_E2PB_MASK and MDCR_EL2_E2TB_MASK * to disable guest access to the profiling and trace buffers */ - vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK; - vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM | + vcpu_mdcr_el2(vcpu) = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK; + vcpu_mdcr_el2(vcpu) |= (MDCR_EL2_TPM | MDCR_EL2_TPMS | MDCR_EL2_TTRF | MDCR_EL2_TPMCR | @@ -98,7 +98,7 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu) /* Is the VM being debugged by userspace? */ if (vcpu->guest_debug) /* Route all software debug exceptions to EL2 */ - vcpu->arch.mdcr_el2 |= MDCR_EL2_TDE; + vcpu_mdcr_el2(vcpu) |= MDCR_EL2_TDE; /* * Trap debug register access when one of the following is true: @@ -107,10 +107,10 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu) * - The guest is not using debug (KVM_ARM64_DEBUG_DIRTY is clear). 
*/ if ((vcpu->guest_debug & KVM_GUESTDBG_USE_HW) || - !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) - vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA; + !(vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY)) + vcpu_mdcr_el2(vcpu) |= MDCR_EL2_TDA; - trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2); + trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu_mdcr_el2(vcpu)); } /** @@ -154,7 +154,7 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) { - unsigned long mdscr, orig_mdcr_el2 = vcpu->arch.mdcr_el2; + unsigned long mdscr, orig_mdcr_el2 = vcpu_mdcr_el2(vcpu); trace_kvm_arm_setup_debug(vcpu, vcpu->guest_debug); @@ -214,7 +214,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) vcpu_write_sys_reg(vcpu, mdscr, MDSCR_EL1); vcpu->arch.debug_ptr = &vcpu->arch.external_debug_state; - vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY; + vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_DIRTY; trace_kvm_arm_set_regset("BKPTS", get_num_brps(), &vcpu->arch.debug_ptr->dbg_bcr[0], @@ -231,11 +231,11 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) /* If KDE or MDE are set, perform a full save/restore cycle. 
*/ if (vcpu_read_sys_reg(vcpu, MDSCR_EL1) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE)) - vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY; + vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_DIRTY; /* Write mdcr_el2 changes since vcpu_load on VHE systems */ - if (has_vhe() && orig_mdcr_el2 != vcpu->arch.mdcr_el2) - write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); + if (has_vhe() && orig_mdcr_el2 != vcpu_mdcr_el2(vcpu)) + write_sysreg(vcpu_mdcr_el2(vcpu), mdcr_el2); trace_kvm_arm_set_dreg32("MDSCR_EL1", vcpu_read_sys_reg(vcpu, MDSCR_EL1)); } @@ -280,16 +280,16 @@ void kvm_arch_vcpu_load_debug_state_flags(struct kvm_vcpu *vcpu) */ if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_PMSVER_SHIFT) && !(read_sysreg_s(SYS_PMBIDR_EL1) & BIT(SYS_PMBIDR_EL1_P_SHIFT))) - vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_SPE; + vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_STATE_SAVE_SPE; /* Check if we have TRBE implemented and available at the host */ if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_TRBE_SHIFT) && !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_PROG)) - vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_TRBE; + vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_STATE_SAVE_TRBE; } void kvm_arch_vcpu_put_debug_state_flags(struct kvm_vcpu *vcpu) { - vcpu->arch.flags &= ~(KVM_ARM64_DEBUG_STATE_SAVE_SPE | + vcpu_flags(vcpu) &= ~(KVM_ARM64_DEBUG_STATE_SAVE_SPE | KVM_ARM64_DEBUG_STATE_SAVE_TRBE); } diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index db135588236a..1871a267e2ed 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -74,16 +74,16 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) { BUG_ON(!current->mm); - vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | - KVM_ARM64_HOST_SVE_IN_USE | - KVM_ARM64_HOST_SVE_ENABLED); - vcpu->arch.flags |= KVM_ARM64_FP_HOST; + vcpu_flags(vcpu) &= ~(KVM_ARM64_FP_ENABLED | + KVM_ARM64_HOST_SVE_IN_USE | + KVM_ARM64_HOST_SVE_ENABLED); + vcpu_flags(vcpu) |= KVM_ARM64_FP_HOST; if (test_thread_flag(TIF_SVE)) - vcpu->arch.flags |= 
KVM_ARM64_HOST_SVE_IN_USE; + vcpu_flags(vcpu) |= KVM_ARM64_HOST_SVE_IN_USE; if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN) - vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED; + vcpu_flags(vcpu) |= KVM_ARM64_HOST_SVE_ENABLED; } /* @@ -96,7 +96,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) { WARN_ON_ONCE(!irqs_disabled()); - if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) { + if (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED) { fpsimd_bind_state_to_cpu(vcpu_fp_regs(vcpu), vcpu->arch.sve_state, vcpu->arch.sve_max_vl); @@ -120,7 +120,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) local_irq_save(flags); - if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) { + if (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED) { if (guest_has_sve) { __vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR); @@ -139,14 +139,14 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) * for EL0. To avoid spurious traps, restore the trap state * seen by kvm_arch_vcpu_load_fp(): */ - if (vcpu->arch.flags & KVM_ARM64_HOST_SVE_ENABLED) + if (vcpu_flags(vcpu) & KVM_ARM64_HOST_SVE_ENABLED) sysreg_clear_set(CPACR_EL1, 0, CPACR_EL1_ZEN_EL0EN); else sysreg_clear_set(CPACR_EL1, CPACR_EL1_ZEN_EL0EN, 0); } update_thread_flag(TIF_SVE, - vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE); + vcpu_flags(vcpu) & KVM_ARM64_HOST_SVE_IN_USE); local_irq_restore(flags); } diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index c4429307a164..fc63e55db2f0 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -782,7 +782,7 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu, struct kvm_vcpu_events *events) { - events->exception.serror_pending = !!(vcpu->arch.hcr_el2 & HCR_VSE); + events->exception.serror_pending = !!(vcpu_hcr_el2(vcpu) & HCR_VSE); events->exception.serror_has_esr = cpus_have_const_cap(ARM64_HAS_RAS_EXTN); if (events->exception.serror_pending && events->exception.serror_has_esr) diff --git a/arch/arm64/kvm/handle_exit.c 
b/arch/arm64/kvm/handle_exit.c index 6f48336b1d86..22e9f03fe901 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -126,7 +126,7 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu) switch (ESR_ELx_EC(esr)) { case ESR_ELx_EC_WATCHPT_LOW: - run->debug.arch.far = vcpu->arch.fault.far_el2; + run->debug.arch.far = vcpu_fault(vcpu).far_el2; fallthrough; case ESR_ELx_EC_SOFTSTP_LOW: case ESR_ELx_EC_BREAKPT_LOW: diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c index e23b9cedb043..4514e345c26f 100644 --- a/arch/arm64/kvm/hyp/exception.c +++ b/arch/arm64/kvm/hyp/exception.c @@ -328,7 +328,7 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (vcpu_el1_is_32bit(vcpu)) { - switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) { + switch (vcpu_flags(vcpu) & KVM_ARM64_EXCEPT_MASK) { case KVM_ARM64_EXCEPT_AA32_UND: enter_exception32(vcpu_ctxt, PSR_AA32_MODE_UND, 4); break; @@ -343,7 +343,7 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu) break; } } else { - switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) { + switch (vcpu_flags(vcpu) & KVM_ARM64_EXCEPT_MASK) { case (KVM_ARM64_EXCEPT_AA64_ELx_SYNC | KVM_ARM64_EXCEPT_AA64_EL1): enter_exception64(vcpu_ctxt, PSR_MODE_EL1h, @@ -367,12 +367,12 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu) void __kvm_adjust_pc(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - if (vcpu->arch.flags & KVM_ARM64_PENDING_EXCEPTION) { + if (vcpu_flags(vcpu) & KVM_ARM64_PENDING_EXCEPTION) { kvm_inject_exception(vcpu); - vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION | + vcpu_flags(vcpu) &= ~(KVM_ARM64_PENDING_EXCEPTION | KVM_ARM64_EXCEPT_MASK); - } else if (vcpu->arch.flags & KVM_ARM64_INCREMENT_PC) { + } else if (vcpu_flags(vcpu) & KVM_ARM64_INCREMENT_PC) { kvm_skip_instr(vcpu); - vcpu->arch.flags &= ~KVM_ARM64_INCREMENT_PC; + vcpu_flags(vcpu) &= ~KVM_ARM64_INCREMENT_PC; } } 
diff --git a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h index 4ebe9f558f3a..55735782d7e3 100644 --- a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h @@ -132,7 +132,7 @@ static inline void __debug_switch_to_guest_common(struct kvm_vcpu *vcpu) struct kvm_guest_debug_arch *host_dbg; struct kvm_guest_debug_arch *guest_dbg; - if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) + if (!(vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY)) return; host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; @@ -151,7 +151,7 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu) struct kvm_guest_debug_arch *host_dbg; struct kvm_guest_debug_arch *guest_dbg; - if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) + if (!(vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY)) return; host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; @@ -162,7 +162,7 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu) __debug_save_state(guest_dbg, guest_ctxt); __debug_restore_state(host_dbg, host_ctxt); - vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY; + vcpu_flags(vcpu) &= ~KVM_ARM64_DEBUG_DIRTY; } #endif /* __ARM64_KVM_HYP_DEBUG_SR_H__ */ diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 41c553a7b5dd..370a8fb827be 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -45,10 +45,10 @@ static inline bool update_fp_enabled(struct kvm_vcpu *vcpu) */ if (!system_supports_fpsimd() || vcpu->arch.host_thread_info->flags & _TIF_FOREIGN_FPSTATE) - vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | + vcpu_flags(vcpu) &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST); - return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED); + return !!(vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED); } /* Save the 32-bit only FPSIMD system register state */ @@ -94,7 +94,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu) 
write_sysreg(0, pmselr_el0); write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0); } - write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); + write_sysreg(vcpu_mdcr_el2(vcpu), mdcr_el2); } static inline void __deactivate_traps_common(void) @@ -106,7 +106,7 @@ static inline void __deactivate_traps_common(void) static inline void ___activate_traps(struct kvm_vcpu *vcpu) { - u64 hcr = vcpu->arch.hcr_el2; + u64 hcr = vcpu_hcr_el2(vcpu); if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM)) hcr |= HCR_TVM; @@ -114,7 +114,7 @@ static inline void ___activate_traps(struct kvm_vcpu *vcpu) write_sysreg(hcr, hcr_el2); if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE)) - write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2); + write_sysreg_s(vcpu_vsesr_el2(vcpu), SYS_VSESR_EL2); } static inline void ___deactivate_traps(struct kvm_vcpu *vcpu) @@ -125,9 +125,9 @@ static inline void ___deactivate_traps(struct kvm_vcpu *vcpu) * the crucial bit is "On taking a vSError interrupt, * HCR_EL2.VSE is cleared to 0." 
*/ - if (vcpu->arch.hcr_el2 & HCR_VSE) { - vcpu->arch.hcr_el2 &= ~HCR_VSE; - vcpu->arch.hcr_el2 |= read_sysreg(hcr_el2) & HCR_VSE; + if (vcpu_hcr_el2(vcpu) & HCR_VSE) { + vcpu_hcr_el2(vcpu) &= ~HCR_VSE; + vcpu_hcr_el2(vcpu) |= read_sysreg(hcr_el2) & HCR_VSE; } } @@ -196,13 +196,13 @@ static inline bool __populate_fault_info(struct kvm_vcpu *vcpu) u8 ec; u64 esr; - esr = vcpu->arch.fault.esr_el2; + esr = vcpu_fault(vcpu).esr_el2; ec = ESR_ELx_EC(esr); if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW) return true; - return __get_fault_info(esr, &vcpu->arch.fault); + return __get_fault_info(esr, &vcpu_fault(vcpu)); } static inline void __hyp_sve_save_host(struct kvm_vcpu *vcpu) @@ -237,7 +237,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) if (system_supports_sve()) { sve_guest = vcpu_has_sve(vcpu); - sve_host = vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE; + sve_host = vcpu_flags(vcpu) & KVM_ARM64_HOST_SVE_IN_USE; } else { sve_guest = false; sve_host = false; @@ -268,13 +268,13 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) } isb(); - if (vcpu->arch.flags & KVM_ARM64_FP_HOST) { + if (vcpu_flags(vcpu) & KVM_ARM64_FP_HOST) { if (sve_host) __hyp_sve_save_host(vcpu); else __fpsimd_save_state(vcpu->arch.host_fpsimd_state); - vcpu->arch.flags &= ~KVM_ARM64_FP_HOST; + vcpu_flags(vcpu) &= ~KVM_ARM64_FP_HOST; } if (sve_guest) @@ -287,7 +287,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) write_sysreg(ctxt_sys_reg(vcpu_ctxt, FPEXC32_EL2), fpexc32_el2); - vcpu->arch.flags |= KVM_ARM64_FP_ENABLED; + vcpu_flags(vcpu) |= KVM_ARM64_FP_ENABLED; return true; } @@ -303,7 +303,7 @@ static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu) * The normal sysreg handling code expects to see the traps, * let's not do anything here. 
*/ - if (vcpu->arch.hcr_el2 & HCR_TVM) + if (vcpu_hcr_el2(vcpu) & HCR_TVM) return false; switch (sysreg) { @@ -421,7 +421,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) { struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) - vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR); + vcpu_fault(vcpu).esr_el2 = read_sysreg_el2(SYS_ESR); if (ARM_SERROR_PENDING(*exit_code)) { u8 esr_ec = kvm_vcpu_trap_get_class(vcpu); diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h index c2668b85b67e..d49985e825cd 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -170,7 +170,7 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu) ctxt_sys_reg(vcpu_ctxt, DACR32_EL2) = read_sysreg(dacr32_el2); ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2) = read_sysreg(ifsr32_el2); - if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) + if (has_vhe() || vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY) ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2); } @@ -188,7 +188,7 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu) write_sysreg(ctxt_sys_reg(vcpu_ctxt, DACR32_EL2), dacr32_el2); write_sysreg(ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2), ifsr32_el2); - if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) + if (has_vhe() || vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY) write_sysreg(ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2), dbgvcr32_el2); } diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c index 7d3f25868cae..934737478d64 100644 --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c @@ -84,10 +84,10 @@ static void __debug_restore_trace(u64 trfcr_el1) void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu) { /* Disable and flush SPE data generation */ - if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE) + if 
(vcpu_flags(vcpu) & KVM_ARM64_DEBUG_STATE_SAVE_SPE) __debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1); /* Disable and flush Self-Hosted Trace generation */ - if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE) + if (vcpu_flags(vcpu) & KVM_ARM64_DEBUG_STATE_SAVE_TRBE) __debug_save_trace(&vcpu->arch.host_debug_state.trfcr_el1); } @@ -98,9 +98,9 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu) void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu) { - if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE) + if (vcpu_flags(vcpu) & KVM_ARM64_DEBUG_STATE_SAVE_SPE) __debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1); - if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE) + if (vcpu_flags(vcpu) & KVM_ARM64_DEBUG_STATE_SAVE_TRBE) __debug_restore_trace(vcpu->arch.host_debug_state.trfcr_el1); } diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index d5780acab6c2..ac7529305717 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -104,7 +104,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu) write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2); cptr = CPTR_EL2_DEFAULT; - if (vcpu_has_sve(vcpu) && (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)) + if (vcpu_has_sve(vcpu) && (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED)) cptr |= CPTR_EL2_TZ; write_sysreg(cptr, cptr_el2); @@ -241,7 +241,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) __sysreg_restore_state_nvhe(host_ctxt); - if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) + if (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED) __fpsimd_save_fpexc32(vcpu); __debug_switch_to_host(vcpu); diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index c2e443202f8e..0113d442bc95 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -153,7 +153,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) sysreg_restore_host_state_vhe(host_ctxt); - if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) + 
if (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED) __fpsimd_save_fpexc32(vcpu); __debug_switch_to_host(vcpu); diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c index b47df73e98d7..867e8856bdcd 100644 --- a/arch/arm64/kvm/inject_fault.c +++ b/arch/arm64/kvm/inject_fault.c @@ -20,7 +20,7 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr bool is_aarch32 = vcpu_mode_is_32bit(vcpu); u32 esr = 0; - vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 | + vcpu_flags(vcpu) |= (KVM_ARM64_EXCEPT_AA64_EL1 | KVM_ARM64_EXCEPT_AA64_ELx_SYNC | KVM_ARM64_PENDING_EXCEPTION); @@ -52,7 +52,7 @@ static void inject_undef64(struct kvm_vcpu *vcpu) { u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT); - vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 | + vcpu_flags(vcpu) |= (KVM_ARM64_EXCEPT_AA64_EL1 | KVM_ARM64_EXCEPT_AA64_ELx_SYNC | KVM_ARM64_PENDING_EXCEPTION); @@ -73,7 +73,7 @@ static void inject_undef64(struct kvm_vcpu *vcpu) static void inject_undef32(struct kvm_vcpu *vcpu) { - vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_UND | + vcpu_flags(vcpu) |= (KVM_ARM64_EXCEPT_AA32_UND | KVM_ARM64_PENDING_EXCEPTION); } @@ -97,13 +97,13 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt, u32 addr) far = vcpu_read_sys_reg(vcpu, FAR_EL1); if (is_pabt) { - vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_IABT | + vcpu_flags(vcpu) |= (KVM_ARM64_EXCEPT_AA32_IABT | KVM_ARM64_PENDING_EXCEPTION); far &= GENMASK(31, 0); far |= (u64)addr << 32; vcpu_write_sys_reg(vcpu, fsr, IFSR32_EL2); } else { /* !iabt */ - vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_DABT | + vcpu_flags(vcpu) |= (KVM_ARM64_EXCEPT_AA32_DABT | KVM_ARM64_PENDING_EXCEPTION); far &= GENMASK(63, 32); far |= addr; diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index ab1ef5313a3e..f94b5b07d2cf 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -81,7 +81,7 @@ static int kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) * KVM_REG_ARM64_SVE_VLS. 
Allocation is deferred until * kvm_arm_vcpu_finalize(), which freezes the configuration. */ - vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_SVE; + vcpu_flags(vcpu) |= KVM_ARM64_GUEST_HAS_SVE; return 0; } @@ -111,7 +111,7 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu) return -ENOMEM; vcpu->arch.sve_state = buf; - vcpu->arch.flags |= KVM_ARM64_VCPU_SVE_FINALIZED; + vcpu_flags(vcpu) |= KVM_ARM64_VCPU_SVE_FINALIZED; return 0; } @@ -162,7 +162,7 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu) !system_has_full_ptr_auth()) return -EINVAL; - vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH; + vcpu_flags(vcpu) |= KVM_ARM64_GUEST_HAS_PTRAUTH; return 0; } diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 1a7968ad078c..8fb57e83e9ec 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -348,7 +348,7 @@ static bool trap_debug_regs(struct kvm_vcpu *vcpu, { if (p->is_write) { vcpu_write_sys_reg(vcpu, p->regval, r->reg); - vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY; + vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_DIRTY; } else { p->regval = vcpu_read_sys_reg(vcpu, r->reg); } @@ -381,7 +381,7 @@ static void reg_to_dbg(struct kvm_vcpu *vcpu, val |= (p->regval & (mask >> shift)) << shift; *dbg_reg = val; - vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY; + vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_DIRTY; } static void dbg_to_reg(struct kvm_vcpu *vcpu,

From patchwork Fri Sep 24 12:53:39 2021
Date: Fri, 24 Sep 2021 13:53:39 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-11-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 10/30] KVM: arm64: Add accessors for hypervisor state in kvm_vcpu_arch
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

Some of the members of vcpu_arch represent state that belongs to the hypervisor. Future patches will factor these out into their own structure. To simplify the refactoring and make it easier to read, add accessors for the members of kvm_vcpu_arch that represent the hypervisor state.
Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_emulate.h | 182 ++++++++++++++++++++++----- arch/arm64/include/asm/kvm_host.h | 38 ++++-- 2 files changed, 181 insertions(+), 39 deletions(-) diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index 7d09a9356d89..e095afeecd10 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -41,9 +41,14 @@ void kvm_inject_vabt(struct kvm_vcpu *vcpu); void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr); void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr); +static __always_inline bool hyp_state_el1_is_32bit(struct vcpu_hyp_state *vcpu_hyps) +{ + return !(hyp_state_hcr_el2(vcpu_hyps) & HCR_RW); +} + static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu) { - return !(vcpu_hcr_el2(vcpu) & HCR_RW); + return hyp_state_el1_is_32bit(&hyp_state(vcpu)); } static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu) @@ -252,14 +257,19 @@ static inline bool vcpu_mode_priv(const struct kvm_vcpu *vcpu) return mode != PSR_MODE_EL0t; } +static __always_inline u32 kvm_hyp_state_get_esr(const struct vcpu_hyp_state *vcpu_hyps) +{ + return hyp_state_fault(vcpu_hyps).esr_el2; +} + static __always_inline u32 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu) { - return vcpu_fault(vcpu).esr_el2; + return kvm_hyp_state_get_esr(&hyp_state(vcpu)); } -static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu) +static __always_inline u32 kvm_hyp_state_get_condition(const struct vcpu_hyp_state *vcpu_hyps) { - u32 esr = kvm_vcpu_get_esr(vcpu); + u32 esr = kvm_hyp_state_get_esr(vcpu_hyps); if (esr & ESR_ELx_CV) return (esr & ESR_ELx_COND_MASK) >> ESR_ELx_COND_SHIFT; @@ -267,111 +277,216 @@ static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu) return -1; } +static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu) +{ + return kvm_hyp_state_get_condition(&hyp_state(vcpu)); +} + 
+static __always_inline phys_addr_t kvm_hyp_state_get_hfar(const struct vcpu_hyp_state *vcpu_hyps) +{ + return hyp_state_fault(vcpu_hyps).far_el2; +} + static __always_inline unsigned long kvm_vcpu_get_hfar(const struct kvm_vcpu *vcpu) { - return vcpu_fault(vcpu).far_el2; + return kvm_hyp_state_get_hfar(&hyp_state(vcpu)); +} + +static __always_inline phys_addr_t kvm_hyp_state_get_fault_ipa(const struct vcpu_hyp_state *vcpu_hyps) +{ + return ((phys_addr_t) hyp_state_fault(vcpu_hyps).hpfar_el2 & HPFAR_MASK) << 8; } static __always_inline phys_addr_t kvm_vcpu_get_fault_ipa(const struct kvm_vcpu *vcpu) { - return ((phys_addr_t) vcpu_fault(vcpu).hpfar_el2 & HPFAR_MASK) << 8; + return kvm_hyp_state_get_fault_ipa(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_get_disr(const struct vcpu_hyp_state *vcpu_hyps) +{ + return hyp_state_fault(vcpu_hyps).disr_el1; } static inline u64 kvm_vcpu_get_disr(const struct kvm_vcpu *vcpu) { - return vcpu_fault(vcpu).disr_el1; + return kvm_hyp_state_get_disr(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_get_imm(const struct vcpu_hyp_state *vcpu_hyps) +{ + return kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_xVC_IMM_MASK; } static inline u32 kvm_vcpu_hvc_get_imm(const struct kvm_vcpu *vcpu) { - return kvm_vcpu_get_esr(vcpu) & ESR_ELx_xVC_IMM_MASK; + return kvm_hyp_state_get_imm(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_dabt_isvalid(const struct vcpu_hyp_state *vcpu_hyps) +{ + return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_ISV); } static __always_inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu) { - return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_ISV); + return kvm_hyp_state_dabt_isvalid(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_iss_nisv_sanitized(const struct vcpu_hyp_state *vcpu_hyps) +{ + return kvm_hyp_state_get_esr(vcpu_hyps) & (ESR_ELx_CM | ESR_ELx_WNR | ESR_ELx_FSC); } static inline unsigned long kvm_vcpu_dabt_iss_nisv_sanitized(const 
struct kvm_vcpu *vcpu) { - return kvm_vcpu_get_esr(vcpu) & (ESR_ELx_CM | ESR_ELx_WNR | ESR_ELx_FSC); + return kvm_hyp_state_iss_nisv_sanitized(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_issext(const struct vcpu_hyp_state *vcpu_hyps) +{ + return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_SSE); } static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu) { - return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_SSE); + return kvm_hyp_state_issext(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_issf(const struct vcpu_hyp_state *vcpu_hyps) +{ + return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_SF); } static inline bool kvm_vcpu_dabt_issf(const struct kvm_vcpu *vcpu) { - return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_SF); + return kvm_hyp_state_issf(&hyp_state(vcpu)); +} + +static __always_inline phys_addr_t kvm_hyp_state_dabt_get_rd(const struct vcpu_hyp_state *vcpu_hyps) +{ + return (kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT; } static __always_inline int kvm_vcpu_dabt_get_rd(const struct kvm_vcpu *vcpu) { - return (kvm_vcpu_get_esr(vcpu) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT; + return kvm_hyp_state_dabt_get_rd(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_abt_iss1tw(const struct vcpu_hyp_state *vcpu_hyps) +{ + return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_S1PTW); } static __always_inline bool kvm_vcpu_abt_iss1tw(const struct kvm_vcpu *vcpu) { - return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_S1PTW); + return kvm_hyp_state_abt_iss1tw(&hyp_state(vcpu)); } /* Always check for S1PTW *before* using this. 
*/ +static __always_inline u32 kvm_hyp_state_dabt_iswrite(const struct vcpu_hyp_state *vcpu_hyps) +{ + return kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_WNR; +} + static __always_inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu) { - return kvm_vcpu_get_esr(vcpu) & ESR_ELx_WNR; + return kvm_hyp_state_dabt_iswrite(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_dabt_is_cm(const struct vcpu_hyp_state *vcpu_hyps) +{ + return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_CM); } static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu) { - return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_CM); + return kvm_hyp_state_dabt_is_cm(&hyp_state(vcpu)); +} + +static __always_inline phys_addr_t kvm_hyp_state_dabt_get_as(const struct vcpu_hyp_state *vcpu_hyps) +{ + return 1 << ((kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_SAS) >> ESR_ELx_SAS_SHIFT); } static __always_inline unsigned int kvm_vcpu_dabt_get_as(const struct kvm_vcpu *vcpu) { - return 1 << ((kvm_vcpu_get_esr(vcpu) & ESR_ELx_SAS) >> ESR_ELx_SAS_SHIFT); + return kvm_hyp_state_dabt_get_as(&hyp_state(vcpu)); } /* This one is not specific to Data Abort */ +static __always_inline u32 kvm_hyp_state_trap_il_is32bit(const struct vcpu_hyp_state *vcpu_hyps) +{ + return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_IL); +} + static __always_inline bool kvm_vcpu_trap_il_is32bit(const struct kvm_vcpu *vcpu) { - return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_IL); + return kvm_hyp_state_trap_il_is32bit(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_trap_get_class(const struct vcpu_hyp_state *vcpu_hyps) +{ + return ESR_ELx_EC(kvm_hyp_state_get_esr(vcpu_hyps)); } static __always_inline u8 kvm_vcpu_trap_get_class(const struct kvm_vcpu *vcpu) { - return ESR_ELx_EC(kvm_vcpu_get_esr(vcpu)); + return kvm_hyp_state_trap_get_class(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_trap_is_iabt(const struct vcpu_hyp_state *vcpu_hyps) +{ + return 
kvm_hyp_state_trap_get_class(vcpu_hyps) == ESR_ELx_EC_IABT_LOW; } static inline bool kvm_vcpu_trap_is_iabt(const struct kvm_vcpu *vcpu) { - return kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_IABT_LOW; + return kvm_hyp_state_trap_is_iabt(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_trap_is_exec_fault(const struct vcpu_hyp_state *vcpu_hyps) +{ + return kvm_hyp_state_trap_is_iabt(vcpu_hyps) && !kvm_hyp_state_abt_iss1tw(vcpu_hyps); } static inline bool kvm_vcpu_trap_is_exec_fault(const struct kvm_vcpu *vcpu) { - return kvm_vcpu_trap_is_iabt(vcpu) && !kvm_vcpu_abt_iss1tw(vcpu); + return kvm_hyp_state_trap_is_exec_fault(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_trap_get_fault(const struct vcpu_hyp_state *vcpu_hyps) +{ + return kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_FSC; } static __always_inline u8 kvm_vcpu_trap_get_fault(const struct kvm_vcpu *vcpu) { - return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC; + return kvm_hyp_state_trap_get_fault(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_trap_get_fault_type(const struct vcpu_hyp_state *vcpu_hyps) +{ + return kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_FSC_TYPE; } static __always_inline u8 kvm_vcpu_trap_get_fault_type(const struct kvm_vcpu *vcpu) { - return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC_TYPE; + return kvm_hyp_state_trap_get_fault_type(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_trap_get_fault_level(const struct vcpu_hyp_state *vcpu_hyps) +{ + return kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_FSC_LEVEL; } static __always_inline u8 kvm_vcpu_trap_get_fault_level(const struct kvm_vcpu *vcpu) { - return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC_LEVEL; + return kvm_hyp_state_trap_get_fault_level(&hyp_state(vcpu)); } -static __always_inline bool kvm_vcpu_abt_issea(const struct kvm_vcpu *vcpu) +static __always_inline u32 kvm_hyp_state_abt_issea(const struct vcpu_hyp_state *vcpu_hyps) { - switch (kvm_vcpu_trap_get_fault(vcpu)) { + switch 
(kvm_hyp_state_trap_get_fault(vcpu_hyps)) { case FSC_SEA: case FSC_SEA_TTW0: case FSC_SEA_TTW1: @@ -388,12 +503,23 @@ static __always_inline bool kvm_vcpu_abt_issea(const struct kvm_vcpu *vcpu) } } -static __always_inline int kvm_vcpu_sys_get_rt(struct kvm_vcpu *vcpu) +static __always_inline bool kvm_vcpu_abt_issea(const struct kvm_vcpu *vcpu) +{ + return kvm_hyp_state_abt_issea(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_sys_get_rt(const struct vcpu_hyp_state *vcpu_hyps) { - u32 esr = kvm_vcpu_get_esr(vcpu); + u32 esr = kvm_hyp_state_get_esr(vcpu_hyps); return ESR_ELx_SYS64_ISS_RT(esr); } + +static __always_inline int kvm_vcpu_sys_get_rt(struct kvm_vcpu *vcpu) +{ + return kvm_hyp_state_sys_get_rt(&hyp_state(vcpu)); +} + static inline bool kvm_is_write_fault(struct kvm_vcpu *vcpu) { if (kvm_vcpu_abt_iss1tw(vcpu)) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 280ee23dfc5a..3e5c173d2360 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -373,12 +373,21 @@ struct kvm_vcpu_arch { } steal; }; +#define hyp_state(vcpu) ((vcpu)->arch) + +/* Accessors for hyp_state parameters related to the hypervisor state. */ +#define hyp_state_hcr_el2(hyps) (hyps)->hcr_el2 +#define hyp_state_mdcr_el2(hyps) (hyps)->mdcr_el2 +#define hyp_state_vsesr_el2(hyps) (hyps)->vsesr_el2 +#define hyp_state_fault(hyps) (hyps)->fault +#define hyp_state_flags(hyps) (hyps)->flags + /* Accessors for vcpu parameters related to the hypervisor state.
*/ -#define vcpu_hcr_el2(vcpu) (vcpu)->arch.hcr_el2 -#define vcpu_mdcr_el2(vcpu) (vcpu)->arch.mdcr_el2 -#define vcpu_vsesr_el2(vcpu) (vcpu)->arch.vsesr_el2 -#define vcpu_fault(vcpu) (vcpu)->arch.fault -#define vcpu_flags(vcpu) (vcpu)->arch.flags +#define vcpu_hcr_el2(vcpu) hyp_state_hcr_el2(&hyp_state(vcpu)) +#define vcpu_mdcr_el2(vcpu) hyp_state_mdcr_el2(&hyp_state(vcpu)) +#define vcpu_vsesr_el2(vcpu) hyp_state_vsesr_el2(&hyp_state(vcpu)) +#define vcpu_fault(vcpu) hyp_state_fault(&hyp_state(vcpu)) +#define vcpu_flags(vcpu) hyp_state_flags(&hyp_state(vcpu)) /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ @@ -441,18 +450,22 @@ struct kvm_vcpu_arch { */ #define KVM_ARM64_INCREMENT_PC (1 << 9) /* Increment PC */ -#define vcpu_has_sve(vcpu) (system_supports_sve() && \ - ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE)) +#define hyp_state_has_sve(hyps) (system_supports_sve() && \ + (hyp_state_flags((hyps)) & KVM_ARM64_GUEST_HAS_SVE)) + +#define vcpu_has_sve(vcpu) hyp_state_has_sve(&hyp_state(vcpu)) #ifdef CONFIG_ARM64_PTR_AUTH -#define vcpu_has_ptrauth(vcpu) \ +#define hyp_state_has_ptrauth(hyps) \ ((cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH) || \ cpus_have_final_cap(ARM64_HAS_GENERIC_AUTH)) && \ - (vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH) + hyp_state_flags(hyps) & KVM_ARM64_GUEST_HAS_PTRAUTH) #else -#define vcpu_has_ptrauth(vcpu) false +#define hyp_state_has_ptrauth(hyps) false #endif +#define vcpu_has_ptrauth(vcpu) hyp_state_has_ptrauth(&hyp_state(vcpu)) + #define vcpu_ctxt(vcpu) ((vcpu)->arch.ctxt) /* VCPU Context accessors (direct) */ @@ -794,8 +807,11 @@ static inline bool kvm_vm_is_protected(struct kvm *kvm) int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature); bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu); +#define kvm_arm_hyp_state_sve_finalized(hyps) \ + (hyp_state_flags((hyps)) & KVM_ARM64_VCPU_SVE_FINALIZED) + #define kvm_arm_vcpu_sve_finalized(vcpu) \ 
- ((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED) + kvm_arm_hyp_state_sve_finalized(&hyp_state(vcpu)) #define kvm_vcpu_has_pmu(vcpu) \ (test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))

From patchwork Fri Sep 24 12:53:40 2021
From: Fuad Tabba
Date: Fri, 24 Sep 2021 13:53:40 +0100
Message-Id: <20210924125359.2587041-12-tabba@google.com>
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 11/30] KVM: arm64: create and use a new vcpu_hyp_state struct
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

Create a struct for the hypervisor state from the related fields in vcpu_arch. This is needed in future patches to reduce the scope of functions from the vcpu as a whole to only the relevant state, via this newly created struct.

Create a new instance of this struct in vcpu_arch and fix the accessors to use the new fields.
Remove the existing fields from vcpu_arch. Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_host.h | 35 ++++++++++++++++++------------- arch/arm64/kernel/asm-offsets.c | 2 +- 2 files changed, 21 insertions(+), 16 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 3e5c173d2360..dc4b5e133d86 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -269,27 +269,35 @@ struct vcpu_reset_state { bool reset; }; +/* Holds the hyp-relevant data of a vcpu. */ +struct vcpu_hyp_state { + /* HYP configuration */ + u64 hcr_el2; + u32 mdcr_el2; + + /* Virtual SError ESR to restore when HCR_EL2.VSE is set */ + u64 vsesr_el2; + + /* Exception Information */ + struct kvm_vcpu_fault_info fault; + + /* Miscellaneous vcpu state flags */ + u64 flags; +}; + struct kvm_vcpu_arch { struct kvm_cpu_context ctxt; void *sve_state; unsigned int sve_max_vl; + struct vcpu_hyp_state hyp_state; + /* Stage 2 paging state used by the hardware on next switch */ struct kvm_s2_mmu *hw_mmu; - /* HYP configuration */ - u64 hcr_el2; - u32 mdcr_el2; - - /* Exception Information */ - struct kvm_vcpu_fault_info fault; - /* State of various workarounds, see kvm_asm.h for bit assignment */ u64 workaround_flags; - /* Miscellaneous vcpu state flags */ - u64 flags; - /* * We maintain more than a single set of debug registers to support * debugging the guest from the host and to maintain separate host and @@ -356,9 +364,6 @@ struct kvm_vcpu_arch { /* Detect first run of a vcpu */ bool has_run_once; - /* Virtual SError ESR to restore when HCR_EL2.VSE is set */ - u64 vsesr_el2; - /* Additional reset state */ struct vcpu_reset_state reset_state; @@ -373,7 +378,7 @@ struct kvm_vcpu_arch { } steal; }; -#define hyp_state(vcpu) ((vcpu)->arch) +#define hyp_state(vcpu) ((vcpu)->arch.hyp_state) /* Accessors for hyp_state parameters related to the hypervisor state. 
*/ #define hyp_state_hcr_el2(hyps) (hyps)->hcr_el2 @@ -633,7 +638,7 @@ void kvm_arm_halt_guest(struct kvm *kvm); void kvm_arm_resume_guest(struct kvm *kvm); #ifndef __KVM_NVHE_HYPERVISOR__ -#define kvm_call_hyp_nvhe(f, ...) \ +#define kvm_call_hyp_nvhe(f, ...) \ ({ \ struct arm_smccc_res res; \ \ diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index c2cc3a2813e6..1776efc3cc9d 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -107,7 +107,7 @@ int main(void) BLANK(); #ifdef CONFIG_KVM DEFINE(VCPU_CONTEXT, offsetof(struct kvm_vcpu, arch.ctxt)); - DEFINE(VCPU_FAULT_DISR, offsetof(struct kvm_vcpu, arch.fault.disr_el1)); + DEFINE(VCPU_FAULT_DISR, offsetof(struct kvm_vcpu, arch.hyp_state.fault.disr_el1)); DEFINE(VCPU_WORKAROUND_FLAGS, offsetof(struct kvm_vcpu, arch.workaround_flags)); DEFINE(CPU_USER_PT_REGS, offsetof(struct kvm_cpu_context, regs)); DEFINE(CPU_APIAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));

From patchwork Fri Sep 24 12:53:41 2021
From: Fuad Tabba
Date: Fri, 24 Sep 2021 13:53:41 +0100
Message-Id: <20210924125359.2587041-13-tabba@google.com>
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 12/30] KVM: arm64: COCCI: add_hypstate.cocci use_hypstate.cocci: Reduce scope of functions to hyp_state
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

Many functions don't need access to the vcpu structure, but only the hyp_state. Reduce their scope.

This applies the semantic patches with the following commands:

FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]" ! -name "debug-sr*") arch/arm64/include/asm/kvm_hyp.h"
spatch --sp-file cocci_refactor/add_hypstate.cocci $FILES --in-place
spatch --sp-file cocci_refactor/use_hypstate.cocci $FILES --in-place

This patch adds variables that may be unused. These will be removed at the end of this patch series.
Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_hyp.h | 2 +- arch/arm64/kvm/hyp/aarch32.c | 2 + arch/arm64/kvm/hyp/exception.c | 19 +++++--- arch/arm64/kvm/hyp/include/hyp/adjust_pc.h | 2 + arch/arm64/kvm/hyp/include/hyp/switch.h | 54 +++++++++++++--------- arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 6 ++- arch/arm64/kvm/hyp/nvhe/switch.c | 21 +++++---- arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c | 1 + arch/arm64/kvm/hyp/vgic-v3-sr.c | 29 ++++++++++++ arch/arm64/kvm/hyp/vhe/switch.c | 25 +++++----- arch/arm64/kvm/hyp/vhe/sysreg-sr.c | 4 +- 11 files changed, 112 insertions(+), 53 deletions(-) diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 2e2b60a1b6c7..2737e05a16b2 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -94,7 +94,7 @@ void __sve_save_state(void *sve_pffr, u32 *fpsr); void __sve_restore_state(void *sve_pffr, u32 *fpsr); #ifndef __KVM_NVHE_HYPERVISOR__ -void activate_traps_vhe_load(struct kvm_vcpu *vcpu); +void activate_traps_vhe_load(struct vcpu_hyp_state *vcpu_hyps); void deactivate_traps_vhe_put(void); #endif diff --git a/arch/arm64/kvm/hyp/aarch32.c b/arch/arm64/kvm/hyp/aarch32.c index 27ebfff023ff..2d45e13d1b12 100644 --- a/arch/arm64/kvm/hyp/aarch32.c +++ b/arch/arm64/kvm/hyp/aarch32.c @@ -46,6 +46,7 @@ static const unsigned short cc_map[16] = { */ bool kvm_condition_valid32(const struct kvm_vcpu *vcpu) { + const struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); unsigned long cpsr; u32 cpsr_cond; @@ -126,6 +127,7 @@ static void kvm_adjust_itstate(struct kvm_cpu_context *vcpu_ctxt) */ void kvm_skip_instr32(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 pc = *ctxt_pc(vcpu_ctxt); bool is_thumb; diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c index 4514e345c26f..d4c2905b595d 100644 --- 
a/arch/arm64/kvm/hyp/exception.c +++ b/arch/arm64/kvm/hyp/exception.c @@ -59,26 +59,31 @@ static void __ctxt_write_spsr_und(struct kvm_cpu_context *vcpu_ctxt, u64 val) static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg) { + const struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); return __ctxt_read_sys_reg(&vcpu_ctxt(vcpu), reg); } static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); __ctxt_write_sys_reg(&vcpu_ctxt(vcpu), val, reg); } static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); __ctxt_write_spsr(&vcpu_ctxt(vcpu), val); } static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); __ctxt_write_spsr_abt(&vcpu_ctxt(vcpu), val); } static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); __ctxt_write_spsr_und(&vcpu_ctxt(vcpu), val); } @@ -326,9 +331,10 @@ static void enter_exception32(struct kvm_cpu_context *vcpu_ctxt, u32 mode, static void kvm_inject_exception(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (vcpu_el1_is_32bit(vcpu)) { - switch (vcpu_flags(vcpu) & KVM_ARM64_EXCEPT_MASK) { + switch (hyp_state_flags(vcpu_hyps) & KVM_ARM64_EXCEPT_MASK) { case KVM_ARM64_EXCEPT_AA32_UND: enter_exception32(vcpu_ctxt, PSR_AA32_MODE_UND, 4); break; @@ -343,7 +349,7 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu) break; } } else { - switch (vcpu_flags(vcpu) & KVM_ARM64_EXCEPT_MASK) { + switch (hyp_state_flags(vcpu_hyps) & KVM_ARM64_EXCEPT_MASK) { case (KVM_ARM64_EXCEPT_AA64_ELx_SYNC | KVM_ARM64_EXCEPT_AA64_EL1): enter_exception64(vcpu_ctxt, PSR_MODE_EL1h, @@ -366,13 +372,14 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu) */ void __kvm_adjust_pc(struct kvm_vcpu 
*vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - if (vcpu_flags(vcpu) & KVM_ARM64_PENDING_EXCEPTION) { + if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_PENDING_EXCEPTION) { kvm_inject_exception(vcpu); - vcpu_flags(vcpu) &= ~(KVM_ARM64_PENDING_EXCEPTION | + hyp_state_flags(vcpu_hyps) &= ~(KVM_ARM64_PENDING_EXCEPTION | KVM_ARM64_EXCEPT_MASK); - } else if (vcpu_flags(vcpu) & KVM_ARM64_INCREMENT_PC) { + } else if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_INCREMENT_PC) { kvm_skip_instr(vcpu); - vcpu_flags(vcpu) &= ~KVM_ARM64_INCREMENT_PC; + hyp_state_flags(vcpu_hyps) &= ~KVM_ARM64_INCREMENT_PC; } } diff --git a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h index 20dde9dbc11b..9bbe452a461a 100644 --- a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h +++ b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h @@ -15,6 +15,7 @@ static inline void kvm_skip_instr(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (ctxt_mode_is_32bit(vcpu_ctxt)) { kvm_skip_instr32(vcpu); @@ -33,6 +34,7 @@ static inline void kvm_skip_instr(struct kvm_vcpu *vcpu) */ static inline void __kvm_skip_instr(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); *ctxt_pc(vcpu_ctxt) = read_sysreg_el2(SYS_ELR); ctxt_gp_regs(vcpu_ctxt)->pstate = read_sysreg_el2(SYS_SPSR); diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 370a8fb827be..5ee8aac86fdc 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -36,6 +36,7 @@ extern struct exception_table_entry __stop___kvm_ex_table; /* Check whether the FP regs were dirtied while in the host-side run loop: */ static inline bool update_fp_enabled(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = 
&hyp_state(vcpu); /* * When the system doesn't support FP/SIMD, we cannot rely on * the _TIF_FOREIGN_FPSTATE flag. However, we always inject an @@ -45,15 +46,16 @@ static inline bool update_fp_enabled(struct kvm_vcpu *vcpu) */ if (!system_supports_fpsimd() || vcpu->arch.host_thread_info->flags & _TIF_FOREIGN_FPSTATE) - vcpu_flags(vcpu) &= ~(KVM_ARM64_FP_ENABLED | + hyp_state_flags(vcpu_hyps) &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST); - return !!(vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED); + return !!(hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED); } /* Save the 32-bit only FPSIMD system register state */ static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (!vcpu_el1_is_32bit(vcpu)) return; @@ -63,6 +65,7 @@ static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); /* * We are about to set CPTR_EL2.TFP to trap all floating point @@ -79,7 +82,7 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) } } -static inline void __activate_traps_common(struct kvm_vcpu *vcpu) +static inline void __activate_traps_common(struct vcpu_hyp_state *vcpu_hyps) { /* Trap on AArch32 cp15 c15 (impdef sysregs) accesses (EL1 or EL0) */ write_sysreg(1 << 15, hstr_el2); @@ -94,7 +97,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu) write_sysreg(0, pmselr_el0); write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0); } - write_sysreg(vcpu_mdcr_el2(vcpu), mdcr_el2); + write_sysreg(hyp_state_mdcr_el2(vcpu_hyps), mdcr_el2); } static inline void __deactivate_traps_common(void) @@ -104,9 +107,9 @@ static inline void __deactivate_traps_common(void) write_sysreg(0, pmuserenr_el0); } -static inline void ___activate_traps(struct kvm_vcpu *vcpu) +static 
inline void ___activate_traps(struct vcpu_hyp_state *vcpu_hyps) { - u64 hcr = vcpu_hcr_el2(vcpu); + u64 hcr = hyp_state_hcr_el2(vcpu_hyps); if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM)) hcr |= HCR_TVM; @@ -114,10 +117,10 @@ static inline void ___activate_traps(struct kvm_vcpu *vcpu) write_sysreg(hcr, hcr_el2); if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE)) - write_sysreg_s(vcpu_vsesr_el2(vcpu), SYS_VSESR_EL2); + write_sysreg_s(hyp_state_vsesr_el2(vcpu_hyps), SYS_VSESR_EL2); } -static inline void ___deactivate_traps(struct kvm_vcpu *vcpu) +static inline void ___deactivate_traps(struct vcpu_hyp_state *vcpu_hyps) { /* * If we pended a virtual abort, preserve it until it gets @@ -125,9 +128,9 @@ static inline void ___deactivate_traps(struct kvm_vcpu *vcpu) * the crucial bit is "On taking a vSError interrupt, * HCR_EL2.VSE is cleared to 0." */ - if (vcpu_hcr_el2(vcpu) & HCR_VSE) { - vcpu_hcr_el2(vcpu) &= ~HCR_VSE; - vcpu_hcr_el2(vcpu) |= read_sysreg(hcr_el2) & HCR_VSE; + if (hyp_state_hcr_el2(vcpu_hyps) & HCR_VSE) { + hyp_state_hcr_el2(vcpu_hyps) &= ~HCR_VSE; + hyp_state_hcr_el2(vcpu_hyps) |= read_sysreg(hcr_el2) & HCR_VSE; } } @@ -191,18 +194,18 @@ static inline bool __get_fault_info(u64 esr, struct kvm_vcpu_fault_info *fault) return true; } -static inline bool __populate_fault_info(struct kvm_vcpu *vcpu) +static inline bool __populate_fault_info(struct vcpu_hyp_state *vcpu_hyps) { u8 ec; u64 esr; - esr = vcpu_fault(vcpu).esr_el2; + esr = hyp_state_fault(vcpu_hyps).esr_el2; ec = ESR_ELx_EC(esr); if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW) return true; - return __get_fault_info(esr, &vcpu_fault(vcpu)); + return __get_fault_info(esr, &hyp_state_fault(vcpu_hyps)); } static inline void __hyp_sve_save_host(struct kvm_vcpu *vcpu) @@ -217,6 +220,7 @@ static inline void __hyp_sve_save_host(struct kvm_vcpu *vcpu) static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = 
&hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2); __sve_restore_state(vcpu_sve_pffr(vcpu), @@ -227,6 +231,7 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu) /* Check for an FPSIMD/SVE trap and handle as appropriate */ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); bool sve_guest, sve_host; u8 esr_ec; @@ -236,8 +241,8 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) return false; if (system_supports_sve()) { - sve_guest = vcpu_has_sve(vcpu); - sve_host = vcpu_flags(vcpu) & KVM_ARM64_HOST_SVE_IN_USE; + sve_guest = hyp_state_has_sve(vcpu_hyps); + sve_host = hyp_state_flags(vcpu_hyps) & KVM_ARM64_HOST_SVE_IN_USE; } else { sve_guest = false; sve_host = false; @@ -268,13 +273,13 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) } isb(); - if (vcpu_flags(vcpu) & KVM_ARM64_FP_HOST) { + if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_HOST) { if (sve_host) __hyp_sve_save_host(vcpu); else __fpsimd_save_state(vcpu->arch.host_fpsimd_state); - vcpu_flags(vcpu) &= ~KVM_ARM64_FP_HOST; + hyp_state_flags(vcpu_hyps) &= ~KVM_ARM64_FP_HOST; } if (sve_guest) @@ -287,13 +292,14 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) write_sysreg(ctxt_sys_reg(vcpu_ctxt, FPEXC32_EL2), fpexc32_el2); - vcpu_flags(vcpu) |= KVM_ARM64_FP_ENABLED; + hyp_state_flags(vcpu_hyps) |= KVM_ARM64_FP_ENABLED; return true; } static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu)); int rt = kvm_vcpu_sys_get_rt(vcpu); @@ -303,7 +309,7 @@ static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu) * The normal sysreg handling code expects to see the traps, * let's not do anything 
here. */ - if (vcpu_hcr_el2(vcpu) & HCR_TVM) + if (hyp_state_hcr_el2(vcpu_hyps) & HCR_TVM) return false; switch (sysreg) { @@ -388,11 +394,12 @@ DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt); static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *ctxt; u64 val; - if (!vcpu_has_ptrauth(vcpu) || + if (!hyp_state_has_ptrauth(vcpu_hyps) || !esr_is_ptrauth_trap(kvm_vcpu_get_esr(vcpu))) return false; @@ -419,9 +426,10 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu) */ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) - vcpu_fault(vcpu).esr_el2 = read_sysreg_el2(SYS_ESR); + hyp_state_fault(vcpu_hyps).esr_el2 = read_sysreg_el2(SYS_ESR); if (ARM_SERROR_PENDING(*exit_code)) { u8 esr_ec = kvm_vcpu_trap_get_class(vcpu); @@ -465,7 +473,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) if (__hyp_handle_ptrauth(vcpu)) goto guest; - if (!__populate_fault_info(vcpu)) + if (!__populate_fault_info(vcpu_hyps)) goto guest; if (static_branch_unlikely(&vgic_v2_cpuif_trap)) { diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h index d49985e825cd..7bc8b34b65b2 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -158,6 +158,7 @@ static inline void __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctx static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (!vcpu_el1_is_32bit(vcpu)) return; @@ -170,12 +171,13 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu) 
ctxt_sys_reg(vcpu_ctxt, DACR32_EL2) = read_sysreg(dacr32_el2); ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2) = read_sysreg(ifsr32_el2); - if (has_vhe() || vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY) + if (has_vhe() || hyp_state_flags(vcpu_hyps) & KVM_ARM64_DEBUG_DIRTY) ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2); } static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (!vcpu_el1_is_32bit(vcpu)) return; @@ -188,7 +190,7 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu) write_sysreg(ctxt_sys_reg(vcpu_ctxt, DACR32_EL2), dacr32_el2); write_sysreg(ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2), ifsr32_el2); - if (has_vhe() || vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY) + if (has_vhe() || hyp_state_flags(vcpu_hyps) & KVM_ARM64_DEBUG_DIRTY) write_sysreg(ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2), dbgvcr32_el2); } diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index ac7529305717..d9326085387b 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -36,11 +36,12 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector); static void __activate_traps(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val; - ___activate_traps(vcpu); - __activate_traps_common(vcpu); + ___activate_traps(vcpu_hyps); + __activate_traps_common(vcpu_hyps); val = CPTR_EL2_DEFAULT; val |= CPTR_EL2_TTA | CPTR_EL2_TAM; @@ -67,13 +68,12 @@ static void __activate_traps(struct kvm_vcpu *vcpu) } } -static void __deactivate_traps(struct kvm_vcpu *vcpu) +static void __deactivate_traps(struct vcpu_hyp_state *vcpu_hyps) { - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); extern char __kvm_hyp_host_vector[]; u64 mdcr_el2, cptr; - ___deactivate_traps(vcpu); + ___deactivate_traps(vcpu_hyps); mdcr_el2 = read_sysreg(mdcr_el2); @@ 
-104,7 +104,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu) write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2); cptr = CPTR_EL2_DEFAULT; - if (vcpu_has_sve(vcpu) && (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED)) + if (hyp_state_has_sve(vcpu_hyps) && (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED)) cptr |= CPTR_EL2_TZ; write_sysreg(cptr, cptr_el2); @@ -170,6 +170,7 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) /* Switch to the guest for legacy non-VHE systems */ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; @@ -236,12 +237,12 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) __timer_disable_traps(); __hyp_vgic_save_state(vcpu); - __deactivate_traps(vcpu); + __deactivate_traps(vcpu_hyps); __load_host_stage2(); __sysreg_restore_state_nvhe(host_ctxt); - if (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED) + if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED) __fpsimd_save_fpexc32(vcpu); __debug_switch_to_host(vcpu); @@ -270,15 +271,17 @@ void __noreturn hyp_panic(void) u64 par = read_sysreg_par(); struct kvm_cpu_context *host_ctxt; struct kvm_vcpu *vcpu; + struct vcpu_hyp_state *vcpu_hyps; struct kvm_cpu_context *vcpu_ctxt; host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; vcpu = host_ctxt->__hyp_running_vcpu; + vcpu_hyps = &hyp_state(vcpu); vcpu_ctxt = &vcpu_ctxt(vcpu); if (vcpu) { __timer_disable_traps(); - __deactivate_traps(vcpu); + __deactivate_traps(vcpu_hyps); __load_host_stage2(); __sysreg_restore_state_nvhe(host_ctxt); } diff --git a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c index 8dbc39026cc5..84304d6d455a 100644 --- a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c +++ b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c @@ -36,6 +36,7 @@ static bool __is_be(struct kvm_cpu_context *vcpu_ctxt) */ int 
__vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm *kvm = kern_hyp_va(vcpu->kvm); struct vgic_dist *vgic = &kvm->arch.vgic; diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c index bdb03b8e50ab..725b2976e7c2 100644 --- a/arch/arm64/kvm/hyp/vgic-v3-sr.c +++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c @@ -473,6 +473,7 @@ static int __vgic_v3_bpr_min(void) static int __vgic_v3_get_group(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 esr = kvm_vcpu_get_esr(vcpu); u8 crm = (esr & ESR_ELx_SYS64_ISS_CRM_MASK) >> ESR_ELx_SYS64_ISS_CRM_SHIFT; @@ -674,6 +675,7 @@ static int __vgic_v3_clear_highest_active_priority(void) static void __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 lr_val; u8 lr_prio, pmr; @@ -733,6 +735,7 @@ static void __vgic_v3_bump_eoicount(void) static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 vid = ctxt_get_reg(vcpu_ctxt, rt); u64 lr_val; @@ -757,6 +760,7 @@ static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 vid = ctxt_get_reg(vcpu_ctxt, rt); u64 lr_val; @@ -795,18 +799,21 @@ static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_read_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); ctxt_set_reg(vcpu_ctxt, rt, 
!!(vmcr & ICH_VMCR_ENG0_MASK)); } static void __vgic_v3_read_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG1_MASK)); } static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val = ctxt_get_reg(vcpu_ctxt, rt); @@ -820,6 +827,7 @@ static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val = ctxt_get_reg(vcpu_ctxt, rt); @@ -833,18 +841,21 @@ static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_read_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr0(vmcr)); } static void __vgic_v3_read_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr1(vmcr)); } static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val = ctxt_get_reg(vcpu_ctxt, rt); u8 bpr_min = __vgic_v3_bpr_min() - 1; @@ -863,6 +874,7 @@ static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val = ctxt_get_reg(vcpu_ctxt, rt); u8 
bpr_min = __vgic_v3_bpr_min(); @@ -884,6 +896,7 @@ static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val; @@ -897,6 +910,7 @@ static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n) static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val = ctxt_get_reg(vcpu_ctxt, rt); @@ -909,6 +923,7 @@ static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n) static void __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_read_apxrn(vcpu, rt, 0); } @@ -916,48 +931,56 @@ static void __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu, static void __vgic_v3_read_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_read_apxrn(vcpu, rt, 1); } static void __vgic_v3_read_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_read_apxrn(vcpu, rt, 2); } static void __vgic_v3_read_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_read_apxrn(vcpu, rt, 3); } static void __vgic_v3_write_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_write_apxrn(vcpu, rt, 0); } static void __vgic_v3_write_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct 
vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_write_apxrn(vcpu, rt, 1); } static void __vgic_v3_write_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_write_apxrn(vcpu, rt, 2); } static void __vgic_v3_write_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_write_apxrn(vcpu, rt, 3); } static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 lr_val; int lr, lr_grp, grp; @@ -978,6 +1001,7 @@ static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); vmcr &= ICH_VMCR_PMR_MASK; vmcr >>= ICH_VMCR_PMR_SHIFT; @@ -986,6 +1010,7 @@ static void __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val = ctxt_get_reg(vcpu_ctxt, rt); @@ -999,6 +1024,7 @@ static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val = __vgic_v3_get_highest_active_priority(); ctxt_set_reg(vcpu_ctxt, rt, val); @@ -1006,6 +1032,7 @@ static void __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct 
vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 vtr, val; @@ -1028,6 +1055,7 @@ static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val = ctxt_get_reg(vcpu_ctxt, rt); @@ -1046,6 +1074,7 @@ static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); int rt; u32 esr; diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 0113d442bc95..c9da0d1c7e72 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -33,10 +33,11 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector); static void __activate_traps(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val; - ___activate_traps(vcpu); + ___activate_traps(vcpu_hyps); val = read_sysreg(cpacr_el1); val |= CPACR_EL1_TTA; @@ -54,7 +55,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu) val |= CPTR_EL2_TAM; if (update_fp_enabled(vcpu)) { - if (vcpu_has_sve(vcpu)) + if (hyp_state_has_sve(vcpu_hyps)) val |= CPACR_EL1_ZEN; } else { val &= ~CPACR_EL1_FPEN; @@ -67,12 +68,11 @@ static void __activate_traps(struct kvm_vcpu *vcpu) } NOKPROBE_SYMBOL(__activate_traps); -static void __deactivate_traps(struct kvm_vcpu *vcpu) +static void __deactivate_traps(struct vcpu_hyp_state *vcpu_hyps) { - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); extern char vectors[]; /* kernel exception vectors */ - ___deactivate_traps(vcpu); + ___deactivate_traps(vcpu_hyps); write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2); @@ -88,10 +88,9 @@ static void 
__deactivate_traps(struct kvm_vcpu *vcpu) } NOKPROBE_SYMBOL(__deactivate_traps); -void activate_traps_vhe_load(struct kvm_vcpu *vcpu) +void activate_traps_vhe_load(struct vcpu_hyp_state *vcpu_hyps) { - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - __activate_traps_common(vcpu); + __activate_traps_common(vcpu_hyps); } void deactivate_traps_vhe_put(void) @@ -110,6 +109,7 @@ void deactivate_traps_vhe_put(void) /* Switch to the guest for VHE systems running in EL2 */ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; @@ -149,11 +149,11 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) sysreg_save_guest_state_vhe(guest_ctxt); - __deactivate_traps(vcpu); + __deactivate_traps(vcpu_hyps); sysreg_restore_host_state_vhe(host_ctxt); - if (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED) + if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED) __fpsimd_save_fpexc32(vcpu); __debug_switch_to_host(vcpu); @@ -164,6 +164,7 @@ NOKPROBE_SYMBOL(__kvm_vcpu_run_vhe); int __kvm_vcpu_run(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); int ret; @@ -202,13 +203,15 @@ static void __hyp_call_panic(u64 spsr, u64 elr, u64 par) { struct kvm_cpu_context *host_ctxt; struct kvm_vcpu *vcpu; + struct vcpu_hyp_state *vcpu_hyps; struct kvm_cpu_context *vcpu_ctxt; host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; vcpu = host_ctxt->__hyp_running_vcpu; + vcpu_hyps = &hyp_state(vcpu); vcpu_ctxt = &vcpu_ctxt(vcpu); - __deactivate_traps(vcpu); + __deactivate_traps(vcpu_hyps); sysreg_restore_host_state_vhe(host_ctxt); panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n", diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c index 
37f56b4743d0..1571c144e9b0 100644 --- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c @@ -63,6 +63,7 @@ NOKPROBE_SYMBOL(sysreg_restore_guest_state_vhe); */ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt; struct kvm_cpu_context *host_ctxt; @@ -82,7 +83,7 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu) vcpu->arch.sysregs_loaded_on_cpu = true; - activate_traps_vhe_load(vcpu); + activate_traps_vhe_load(vcpu_hyps); } /** @@ -98,6 +99,7 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu) */ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt; struct kvm_cpu_context *host_ctxt;

From patchwork Fri Sep 24 12:53:42 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515305
Date: Fri, 24 Sep 2021 13:53:42 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-14-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
X-Mailer: git-send-email 2.33.0.685.g46640cef36-goog
Subject: [RFC PATCH v1 13/30] KVM: arm64: change function parameters to use kvm_cpu_ctxt and hyp_state
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
kernel-team@android.com, tabba@google.com

__kvm_skip_instr, kvm_condition_valid, and __kvm_adjust_pc are passed the vcpu when all they need is the cpu context and the hypervisor state. Refactor them to take those instead. In future patches, these functions are called, directly or indirectly, from contexts that don't have access to the whole vcpu.

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_emulate.h | 15 ++++++++++-----
 arch/arm64/kvm/hyp/aarch32.c | 14 +++++---------
 arch/arm64/kvm/hyp/exception.c | 19 ++++++++++---------
 arch/arm64/kvm/hyp/include/hyp/adjust_pc.h | 14 ++++++--------
 arch/arm64/kvm/hyp/include/hyp/switch.h | 2 +-
 arch/arm64/kvm/hyp/nvhe/switch.c | 2 +-
 arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c | 6 +++---
 arch/arm64/kvm/hyp/vgic-v3-sr.c | 4 ++--
 arch/arm64/kvm/hyp/vhe/switch.c | 2 +-
 9 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index e095afeecd10..28fc4781249e 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -33,8 +33,8 @@ enum exception_type { except_type_serror = 0x180, }; -bool kvm_condition_valid32(const struct kvm_vcpu *vcpu); -void kvm_skip_instr32(struct kvm_vcpu *vcpu); +bool kvm_condition_valid32(const struct kvm_cpu_context *vcpu_ctxt, const struct vcpu_hyp_state *vcpu_hyps); +void kvm_skip_instr32(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps); void kvm_inject_undefined(struct kvm_vcpu *vcpu); void
kvm_inject_vabt(struct kvm_vcpu *vcpu); @@ -162,14 +162,19 @@ static __always_inline bool vcpu_mode_is_32bit(const struct kvm_vcpu *vcpu) return ctxt_mode_is_32bit(&vcpu_ctxt(vcpu)); } -static __always_inline bool kvm_condition_valid(const struct kvm_vcpu *vcpu) +static __always_inline bool __kvm_condition_valid(const struct kvm_cpu_context *vcpu_ctxt, const struct vcpu_hyp_state *vcpu_hyps) { - if (vcpu_mode_is_32bit(vcpu)) - return kvm_condition_valid32(vcpu); + if (ctxt_mode_is_32bit(vcpu_ctxt)) + return kvm_condition_valid32(vcpu_ctxt, vcpu_hyps); return true; } +static __always_inline bool kvm_condition_valid(const struct kvm_vcpu *vcpu) +{ + return __kvm_condition_valid(&vcpu->arch.ctxt, &hyp_state(vcpu)); +} + static inline void ctxt_set_thumb(struct kvm_cpu_context *ctxt) { *ctxt_cpsr(ctxt) |= PSR_AA32_T_BIT; diff --git a/arch/arm64/kvm/hyp/aarch32.c b/arch/arm64/kvm/hyp/aarch32.c index 2d45e13d1b12..2feb2f8d9907 100644 --- a/arch/arm64/kvm/hyp/aarch32.c +++ b/arch/arm64/kvm/hyp/aarch32.c @@ -44,20 +44,18 @@ static const unsigned short cc_map[16] = { /* * Check if a trapped instruction should have been executed or not. */ -bool kvm_condition_valid32(const struct kvm_vcpu *vcpu) +bool kvm_condition_valid32(const struct kvm_cpu_context *vcpu_ctxt, const struct vcpu_hyp_state *vcpu_hyps) { - const struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); unsigned long cpsr; u32 cpsr_cond; int cond; /* Top two bits non-zero? Unconditional. */ - if (kvm_vcpu_get_esr(vcpu) >> 30) + if (kvm_hyp_state_get_esr(vcpu_hyps) >> 30) return true; /* Is condition field valid? 
*/ - cond = kvm_vcpu_get_condition(vcpu); + cond = kvm_hyp_state_get_condition(vcpu_hyps); if (cond == 0xE) return true; @@ -125,15 +123,13 @@ static void kvm_adjust_itstate(struct kvm_cpu_context *vcpu_ctxt) * kvm_skip_instr - skip a trapped instruction and proceed to the next * @vcpu: The vcpu pointer */ -void kvm_skip_instr32(struct kvm_vcpu *vcpu) +void kvm_skip_instr32(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 pc = *ctxt_pc(vcpu_ctxt); bool is_thumb; is_thumb = !!(*ctxt_cpsr(vcpu_ctxt) & PSR_AA32_T_BIT); - if (is_thumb && !kvm_vcpu_trap_il_is32bit(vcpu)) + if (is_thumb && !kvm_hyp_state_trap_il_is32bit(vcpu_hyps)) pc += 2; else pc += 4; diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c index d4c2905b595d..a08806efe031 100644 --- a/arch/arm64/kvm/hyp/exception.c +++ b/arch/arm64/kvm/hyp/exception.c @@ -329,11 +329,9 @@ static void enter_exception32(struct kvm_cpu_context *vcpu_ctxt, u32 mode, *ctxt_pc(vcpu_ctxt) = vect_offset; } -static void kvm_inject_exception(struct kvm_vcpu *vcpu) +static void kvm_inject_exception(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - if (vcpu_el1_is_32bit(vcpu)) { + if (hyp_state_el1_is_32bit(vcpu_hyps)) { switch (hyp_state_flags(vcpu_hyps) & KVM_ARM64_EXCEPT_MASK) { case KVM_ARM64_EXCEPT_AA32_UND: enter_exception32(vcpu_ctxt, PSR_AA32_MODE_UND, 4); @@ -370,16 +368,19 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu) * Adjust the guest PC (and potentially exception state) depending on * flags provided by the emulation code. 
*/ -void __kvm_adjust_pc(struct kvm_vcpu *vcpu) +void kvm_adjust_pc(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_PENDING_EXCEPTION) { - kvm_inject_exception(vcpu); + kvm_inject_exception(vcpu_ctxt, vcpu_hyps); hyp_state_flags(vcpu_hyps) &= ~(KVM_ARM64_PENDING_EXCEPTION | KVM_ARM64_EXCEPT_MASK); } else if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_INCREMENT_PC) { - kvm_skip_instr(vcpu); + kvm_skip_instr(vcpu_ctxt, vcpu_hyps); hyp_state_flags(vcpu_hyps) &= ~KVM_ARM64_INCREMENT_PC; } } + +void __kvm_adjust_pc(struct kvm_vcpu *vcpu) +{ + kvm_adjust_pc(&vcpu_ctxt(vcpu), &hyp_state(vcpu)); +} diff --git a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h index 9bbe452a461a..4e0cfbe635e5 100644 --- a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h +++ b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h @@ -13,12 +13,10 @@ #include #include -static inline void kvm_skip_instr(struct kvm_vcpu *vcpu) +static inline void kvm_skip_instr(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (ctxt_mode_is_32bit(vcpu_ctxt)) { - kvm_skip_instr32(vcpu); + kvm_skip_instr32(vcpu_ctxt, vcpu_hyps); } else { *ctxt_pc(vcpu_ctxt) += 4; *ctxt_cpsr(vcpu_ctxt) &= ~PSR_BTYPE_MASK; @@ -32,14 +30,12 @@ static inline void kvm_skip_instr(struct kvm_vcpu *vcpu) * Skip an instruction which has been emulated at hyp while most guest sysregs * are live. 
*/ -static inline void __kvm_skip_instr(struct kvm_vcpu *vcpu) +static inline void __kvm_skip_instr(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); *ctxt_pc(vcpu_ctxt) = read_sysreg_el2(SYS_ELR); ctxt_gp_regs(vcpu_ctxt)->pstate = read_sysreg_el2(SYS_SPSR); - kvm_skip_instr(vcpu); + kvm_skip_instr(vcpu_ctxt, vcpu_hyps); write_sysreg_el2(ctxt_gp_regs(vcpu_ctxt)->pstate, SYS_SPSR); write_sysreg_el2(*ctxt_pc(vcpu_ctxt), SYS_ELR); @@ -54,4 +50,6 @@ static inline void kvm_skip_host_instr(void) write_sysreg_el2(read_sysreg_el2(SYS_ELR) + 4, SYS_ELR); } +void kvm_adjust_pc(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps); + #endif diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 5ee8aac86fdc..075719c07009 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -350,7 +350,7 @@ static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu) return false; } - __kvm_skip_instr(vcpu); + __kvm_skip_instr(vcpu_ctxt, vcpu_hyps); return true; } diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index d9326085387b..eadbf2ccaf68 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -204,7 +204,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) */ __debug_save_host_buffers_nvhe(vcpu); - __kvm_adjust_pc(vcpu); + kvm_adjust_pc(vcpu_ctxt, vcpu_hyps); /* * We must restore the 32-bit state before the sysregs, thanks diff --git a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c index 84304d6d455a..acd0d21394e3 100644 --- a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c +++ b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c @@ -55,13 +55,13 @@ int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu) /* Reject anything but a 32bit access */ if (kvm_vcpu_dabt_get_as(vcpu) 
!= sizeof(u32)) { - __kvm_skip_instr(vcpu); + __kvm_skip_instr(vcpu_ctxt, vcpu_hyps); return -1; } /* Not aligned? Don't bother */ if (fault_ipa & 3) { - __kvm_skip_instr(vcpu); + __kvm_skip_instr(vcpu_ctxt, vcpu_hyps); return -1; } @@ -85,7 +85,7 @@ int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu) ctxt_set_reg(vcpu_ctxt, rd, data); } - __kvm_skip_instr(vcpu); + __kvm_skip_instr(vcpu_ctxt, vcpu_hyps); return 1; } diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c index 725b2976e7c2..d025a5830dcc 100644 --- a/arch/arm64/kvm/hyp/vgic-v3-sr.c +++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c @@ -1086,7 +1086,7 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) esr = kvm_vcpu_get_esr(vcpu); if (ctxt_mode_is_32bit(vcpu_ctxt)) { if (!kvm_condition_valid(vcpu)) { - __kvm_skip_instr(vcpu); + __kvm_skip_instr(vcpu_ctxt, vcpu_hyps); return 1; } @@ -1198,7 +1198,7 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) rt = kvm_vcpu_sys_get_rt(vcpu); fn(vcpu, vmcr, rt); - __kvm_skip_instr(vcpu); + __kvm_skip_instr(vcpu_ctxt, vcpu_hyps); return 1; } diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index c9da0d1c7e72..395274532c20 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -135,7 +135,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) __load_guest_stage2(vcpu->arch.hw_mmu); __activate_traps(vcpu); - __kvm_adjust_pc(vcpu); + kvm_adjust_pc(vcpu_ctxt, vcpu_hyps); sysreg_restore_guest_state_vhe(guest_ctxt); __debug_switch_to_guest(vcpu);

From patchwork Fri Sep 24 12:53:43 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515309
Date: Fri, 24 Sep 2021 13:53:43 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-15-tabba@google.com> Mime-Version: 1.0 References: <20210924125359.2587041-1-tabba@google.com> X-Mailer: git-send-email 2.33.0.685.g46640cef36-goog Subject: [RFC PATCH v1 14/30] KVM: arm64: reduce scope of vgic v2 From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210924_055433_765403_3AEFA26C X-CRM114-Status: GOOD ( 19.11 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org vgic v2 interface functions are passed vcpu, when the state that they need is the vgic distributor, as well as the kvm_cpu_context and the recently created vcpu_hyp_state. Reduce the scope of its interface functions to these structs. Pass the vgic distributor to fixup_guest_exit so that it's not dependent on struct kvm for the vgic state. NOTE: this change to fixup_guest_exit is temporary, and will be tidied up in in a subsequent patch in this series. 
Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_hyp.h         |  2 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h  |  4 ++--
 arch/arm64/kvm/hyp/nvhe/switch.c         |  4 +++-
 arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c | 16 ++++++----------
 arch/arm64/kvm/hyp/vhe/switch.c          |  3 ++-
 5 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 2737e05a16b2..d9a8872a7efb 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -55,7 +55,7 @@ DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
  */
 #define __kvm_swab32(x) ___constant_swab32(x)
 
-int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu);
+int __vgic_v2_perform_cpuif_access(struct vgic_dist *vgic, struct kvm_cpu_context *ctxt, struct vcpu_hyp_state *hyps);
 
 void __vgic_v3_save_state(struct vgic_v3_cpu_if *cpu_if);
 void __vgic_v3_restore_state(struct vgic_v3_cpu_if *cpu_if);
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 075719c07009..30fcfe84f609 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -424,7 +424,7 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
 * the guest, false when we should restore the host state and return to the
 * main run loop.
 */
-static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgic, u64 *exit_code)
 {
	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
@@ -486,7 +486,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
		!kvm_vcpu_abt_iss1tw(vcpu);
 
	if (valid) {
-		int ret = __vgic_v2_perform_cpuif_access(vcpu);
+		int ret = __vgic_v2_perform_cpuif_access(vgic, vcpu_ctxt, vcpu_hyps);
 
		if (ret == 1)
			goto guest;
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index eadbf2ccaf68..164b0f899f7b 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -172,6 +172,8 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
+	struct vgic_dist *vgic = &kvm->arch.vgic;
	struct kvm_cpu_context *host_ctxt;
	struct kvm_cpu_context *guest_ctxt;
	bool pmu_switch_needed;
@@ -230,7 +232,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
		exit_code = __guest_enter(vcpu);
 
		/* And we're baaack! */
-	} while (fixup_guest_exit(vcpu, &exit_code));
+	} while (fixup_guest_exit(vcpu, vgic, &exit_code));
 
	__sysreg_save_state_nvhe(guest_ctxt);
	__sysreg32_save_state(vcpu);
diff --git a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
index acd0d21394e3..787f973af43a 100644
--- a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
+++ b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
@@ -34,19 +34,15 @@ static bool __is_be(struct kvm_cpu_context *vcpu_ctxt)
 *  0: Not a GICV access
 * -1: Illegal GICV access successfully performed
 */
-int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
+int __vgic_v2_perform_cpuif_access(struct vgic_dist *vgic, struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
-	struct vgic_dist *vgic = &kvm->arch.vgic;
	phys_addr_t fault_ipa;
	void __iomem *addr;
	int rd;
 
	/* Build the full address */
-	fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
-	fault_ipa |= kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0);
+	fault_ipa = kvm_hyp_state_get_fault_ipa(vcpu_hyps);
+	fault_ipa |= kvm_hyp_state_get_hfar(vcpu_hyps) & GENMASK(11, 0);
 
	/* If not for GICV, move on */
	if (fault_ipa < vgic->vgic_cpu_base ||
@@ -54,7 +50,7 @@ int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
		return 0;
 
	/* Reject anything but a 32bit access */
-	if (kvm_vcpu_dabt_get_as(vcpu) != sizeof(u32)) {
+	if (kvm_hyp_state_dabt_get_as(vcpu_hyps) != sizeof(u32)) {
		__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
		return -1;
	}
@@ -65,11 +61,11 @@ int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
		return -1;
	}
 
-	rd = kvm_vcpu_dabt_get_rd(vcpu);
+	rd = kvm_hyp_state_dabt_get_rd(vcpu_hyps);
	addr = kvm_vgic_global_state.vcpu_hyp_va;
	addr += fault_ipa - vgic->vgic_cpu_base;
 
-	if (kvm_vcpu_dabt_iswrite(vcpu)) {
+	if (kvm_hyp_state_dabt_iswrite(vcpu_hyps)) {
		u32 data = ctxt_get_reg(vcpu_ctxt, rd);
		if (__is_be(vcpu_ctxt)) {
			/* guest pre-swabbed data, undo this for writel() */
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 395274532c20..f315058a50ca 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -111,6 +111,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 {
	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+	struct vgic_dist *vgic = &vcpu->kvm->arch.vgic;
	struct kvm_cpu_context *host_ctxt;
	struct kvm_cpu_context *guest_ctxt;
	u64 exit_code;
@@ -145,7 +146,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
		exit_code = __guest_enter(vcpu);
 
		/* And we're baaack! */
-	} while (fixup_guest_exit(vcpu, &exit_code));
+	} while (fixup_guest_exit(vcpu, vgic, &exit_code));
 
	sysreg_save_guest_state_vhe(guest_ctxt);

From patchwork Fri Sep 24 12:53:44 2021
Date: Fri, 24 Sep 2021 13:53:44 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-16-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 15/30] KVM: arm64: COCCI: vgic3_cpu.cocci: reduce scope of vgic v3
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

The vgic v3 interface functions are passed the vcpu, when the state they actually need is the vgic interface, together with the kvm_cpu_context and the recently created vcpu_hyp_state. Reduce the scope of these interface functions to those structs.

This applies the semantic patch with the following command:

spatch --sp-file cocci_refactor/vgic3_cpu.cocci arch/arm64/kvm/hyp/vgic-v3-sr.c --in-place

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/vgic-v3-sr.c | 247 ++++++++++++++++++--------------
 1 file changed, 137 insertions(+), 110 deletions(-)

diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
index d025a5830dcc..3e1951b04fce 100644
--- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
+++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
@@ -471,11 +471,10 @@ static int __vgic_v3_bpr_min(void)
	return 8 - vtr_to_nr_pre_bits(read_gicreg(ICH_VTR_EL2));
 }
 
-static int __vgic_v3_get_group(struct kvm_vcpu *vcpu)
+static int __vgic_v3_get_group(struct kvm_cpu_context *vcpu_ctxt,
+			       struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	u32 esr = kvm_vcpu_get_esr(vcpu);
+	u32 esr = kvm_hyp_state_get_esr(vcpu_hyps);
	u8 crm = (esr & ESR_ELx_SYS64_ISS_CRM_MASK) >> ESR_ELx_SYS64_ISS_CRM_SHIFT;
 
	return crm != 8;
@@ -483,10 +482,11 @@ static int __vgic_v3_get_group(struct kvm_vcpu *vcpu)
 
 #define GICv3_IDLE_PRIORITY	0xff
 
-static int __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, u32 vmcr,
+static int __vgic_v3_highest_priority_lr(struct vgic_v3_cpu_if *cpu_if,
+					 u32 vmcr,
					 u64 *lr_val)
 {
-	unsigned int used_lrs = vcpu->arch.vgic_cpu.vgic_v3.used_lrs;
+	unsigned int used_lrs = cpu_if->used_lrs;
	u8 priority = GICv3_IDLE_PRIORITY;
	int i, lr = -1;
 
@@ -522,10 +522,10 @@ static int __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, u32 vmcr,
	return lr;
 }
 
-static int __vgic_v3_find_active_lr(struct kvm_vcpu *vcpu, int intid,
+static int __vgic_v3_find_active_lr(struct vgic_v3_cpu_if *cpu_if, int intid,
				    u64 *lr_val)
 {
-	unsigned int used_lrs = vcpu->arch.vgic_cpu.vgic_v3.used_lrs;
+	unsigned int used_lrs = cpu_if->used_lrs;
	int i;
 
	for (i = 0; i < used_lrs; i++) {
@@ -673,17 +673,18 @@ static int __vgic_v3_clear_highest_active_priority(void)
	return GICv3_IDLE_PRIORITY;
 }
 
-static void __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_iar(struct vgic_v3_cpu_if *cpu_if,
+			       struct kvm_cpu_context *vcpu_ctxt,
+			       struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+			       int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	u64 lr_val;
	u8 lr_prio, pmr;
	int lr, grp;
 
-	grp = __vgic_v3_get_group(vcpu);
+	grp = __vgic_v3_get_group(vcpu_ctxt, vcpu_hyps);
 
-	lr = __vgic_v3_highest_priority_lr(vcpu, vmcr, &lr_val);
+	lr = __vgic_v3_highest_priority_lr(cpu_if, vmcr, &lr_val);
	if (lr < 0)
		goto spurious;
 
@@ -733,10 +734,11 @@ static void __vgic_v3_bump_eoicount(void)
	write_gicreg(hcr, ICH_HCR_EL2);
 }
 
-static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_dir(struct vgic_v3_cpu_if *cpu_if,
+				struct kvm_cpu_context *vcpu_ctxt,
+				struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	u32 vid = ctxt_get_reg(vcpu_ctxt, rt);
	u64 lr_val;
	int lr;
@@ -749,7 +751,7 @@ static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
	if (vid >= VGIC_MIN_LPI)
		return;
 
-	lr = __vgic_v3_find_active_lr(vcpu, vid, &lr_val);
+	lr = __vgic_v3_find_active_lr(cpu_if, vid, &lr_val);
	if (lr == -1) {
		__vgic_v3_bump_eoicount();
		return;
@@ -758,16 +760,17 @@ static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
	__vgic_v3_clear_active_lr(lr, lr_val);
 }
 
-static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_eoir(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	u32 vid = ctxt_get_reg(vcpu_ctxt, rt);
	u64 lr_val;
	u8 lr_prio, act_prio;
	int lr, grp;
 
-	grp = __vgic_v3_get_group(vcpu);
+	grp = __vgic_v3_get_group(vcpu_ctxt, vcpu_hyps);
 
	/* Drop priority in any case */
	act_prio = __vgic_v3_clear_highest_active_priority();
@@ -780,7 +783,7 @@ static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
	if (vmcr & ICH_VMCR_EOIM_MASK)
		return;
 
-	lr = __vgic_v3_find_active_lr(vcpu, vid, &lr_val);
+	lr = __vgic_v3_find_active_lr(cpu_if, vid, &lr_val);
	if (lr == -1) {
		__vgic_v3_bump_eoicount();
		return;
@@ -797,24 +800,27 @@ static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
	__vgic_v3_clear_active_lr(lr, lr_val);
 }
 
-static void __vgic_v3_read_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_igrpen0(struct vgic_v3_cpu_if *cpu_if,
+				   struct kvm_cpu_context *vcpu_ctxt,
+				   struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				   int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG0_MASK));
 }
 
-static void __vgic_v3_read_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_igrpen1(struct vgic_v3_cpu_if *cpu_if,
+				   struct kvm_cpu_context *vcpu_ctxt,
+				   struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				   int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG1_MASK));
 }
 
-static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_igrpen0(struct vgic_v3_cpu_if *cpu_if,
+				    struct kvm_cpu_context *vcpu_ctxt,
+				    struct vcpu_hyp_state *vcpu_hyps,
+				    u32 vmcr, int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 
	if (val & 1)
@@ -825,10 +831,11 @@ static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
	__vgic_v3_write_vmcr(vmcr);
 }
 
-static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_igrpen1(struct vgic_v3_cpu_if *cpu_if,
+				    struct kvm_cpu_context *vcpu_ctxt,
+				    struct vcpu_hyp_state *vcpu_hyps,
+				    u32 vmcr, int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 
	if (val & 1)
@@ -839,24 +846,27 @@ static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
	__vgic_v3_write_vmcr(vmcr);
 }
 
-static void __vgic_v3_read_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_bpr0(struct vgic_v3_cpu_if *cpu_if,
+				struct kvm_cpu_context *vcpu_ctxt,
+				struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr0(vmcr));
 }
 
-static void __vgic_v3_read_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_bpr1(struct vgic_v3_cpu_if *cpu_if,
+				struct kvm_cpu_context *vcpu_ctxt,
+				struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr1(vmcr));
 }
 
-static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_bpr0(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
	u8 bpr_min = __vgic_v3_bpr_min() - 1;
 
@@ -872,10 +882,11 @@ static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
	__vgic_v3_write_vmcr(vmcr);
 }
 
-static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_bpr1(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
	u8 bpr_min = __vgic_v3_bpr_min();
 
@@ -894,13 +905,14 @@ static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
	__vgic_v3_write_vmcr(vmcr);
 }
 
-static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
+static void __vgic_v3_read_apxrn(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, int rt,
+				 int n)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	u32 val;
 
-	if (!__vgic_v3_get_group(vcpu))
+	if (!__vgic_v3_get_group(vcpu_ctxt, vcpu_hyps))
		val = __vgic_v3_read_ap0rn(n);
	else
		val = __vgic_v3_read_ap1rn(n);
@@ -908,86 +920,94 @@ static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
	ctxt_set_reg(vcpu_ctxt, rt, val);
 }
 
-static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
+static void __vgic_v3_write_apxrn(struct vgic_v3_cpu_if *cpu_if,
+				  struct kvm_cpu_context *vcpu_ctxt,
+				  struct vcpu_hyp_state *vcpu_hyps, int rt,
+				  int n)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	u32 val = ctxt_get_reg(vcpu_ctxt, rt);
 
-	if (!__vgic_v3_get_group(vcpu))
+	if (!__vgic_v3_get_group(vcpu_ctxt, vcpu_hyps))
		__vgic_v3_write_ap0rn(val, n);
	else
		__vgic_v3_write_ap1rn(val, n);
 }
 
-static void __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu,
+static void __vgic_v3_read_apxr0(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps,
				 u32 vmcr, int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_read_apxrn(vcpu, rt, 0);
+	__vgic_v3_read_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 0);
 }
 
-static void __vgic_v3_read_apxr1(struct kvm_vcpu *vcpu,
+static void __vgic_v3_read_apxr1(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps,
				 u32 vmcr, int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_read_apxrn(vcpu, rt, 1);
+	__vgic_v3_read_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 1);
 }
 
-static void __vgic_v3_read_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_apxr2(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_read_apxrn(vcpu, rt, 2);
+	__vgic_v3_read_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 2);
 }
 
-static void __vgic_v3_read_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_apxr3(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_read_apxrn(vcpu, rt, 3);
+	__vgic_v3_read_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 3);
 }
 
-static void __vgic_v3_write_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_apxr0(struct vgic_v3_cpu_if *cpu_if,
+				  struct kvm_cpu_context *vcpu_ctxt,
+				  struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				  int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_write_apxrn(vcpu, rt, 0);
+	__vgic_v3_write_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 0);
 }
 
-static void __vgic_v3_write_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_apxr1(struct vgic_v3_cpu_if *cpu_if,
+				  struct kvm_cpu_context *vcpu_ctxt,
+				  struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				  int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_write_apxrn(vcpu, rt, 1);
+	__vgic_v3_write_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 1);
 }
 
-static void __vgic_v3_write_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_apxr2(struct vgic_v3_cpu_if *cpu_if,
+				  struct kvm_cpu_context *vcpu_ctxt,
+				  struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				  int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_write_apxrn(vcpu, rt, 2);
+	__vgic_v3_write_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 2);
 }
 
-static void __vgic_v3_write_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_apxr3(struct vgic_v3_cpu_if *cpu_if,
+				  struct kvm_cpu_context *vcpu_ctxt,
+				  struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				  int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_write_apxrn(vcpu, rt, 3);
+	__vgic_v3_write_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 3);
 }
 
-static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_hppir(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	u64 lr_val;
	int lr, lr_grp, grp;
 
-	grp = __vgic_v3_get_group(vcpu);
+	grp = __vgic_v3_get_group(vcpu_ctxt, vcpu_hyps);
 
-	lr = __vgic_v3_highest_priority_lr(vcpu, vmcr, &lr_val);
+	lr = __vgic_v3_highest_priority_lr(cpu_if, vmcr, &lr_val);
	if (lr == -1)
		goto spurious;
 
@@ -999,19 +1019,21 @@ static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
	ctxt_set_reg(vcpu_ctxt, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK);
 }
 
-static void __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_pmr(struct vgic_v3_cpu_if *cpu_if,
+			       struct kvm_cpu_context *vcpu_ctxt,
+			       struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+			       int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	vmcr &= ICH_VMCR_PMR_MASK;
	vmcr >>= ICH_VMCR_PMR_SHIFT;
	ctxt_set_reg(vcpu_ctxt, rt, vmcr);
 }
 
-static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_pmr(struct vgic_v3_cpu_if *cpu_if,
+				struct kvm_cpu_context *vcpu_ctxt,
+				struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	u32 val = ctxt_get_reg(vcpu_ctxt, rt);
 
	val <<= ICH_VMCR_PMR_SHIFT;
@@ -1022,18 +1044,20 @@ static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
	write_gicreg(vmcr, ICH_VMCR_EL2);
 }
 
-static void __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_rpr(struct vgic_v3_cpu_if *cpu_if,
+			       struct kvm_cpu_context *vcpu_ctxt,
+			       struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+			       int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	u32 val = __vgic_v3_get_highest_active_priority();
	ctxt_set_reg(vcpu_ctxt, rt, val);
 }
 
-static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_ctlr(struct vgic_v3_cpu_if *cpu_if,
+				struct kvm_cpu_context *vcpu_ctxt,
+				struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	u32 vtr, val;
 
	vtr = read_gicreg(ICH_VTR_EL2);
@@ -1053,10 +1077,11 @@ static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
	ctxt_set_reg(vcpu_ctxt, rt, val);
 }
 
-static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_ctlr(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	u32 val = ctxt_get_reg(vcpu_ctxt, rt);
 
	if (val & ICC_CTLR_EL1_CBPR_MASK)
@@ -1074,16 +1099,18 @@ static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
 {
+	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
	int rt;
	u32 esr;
	u32 vmcr;
-	void (*fn)(struct kvm_vcpu *, u32, int);
+	void (*fn)(struct vgic_v3_cpu_if *, struct kvm_cpu_context *,
+		   struct vcpu_hyp_state *, u32, int);
	bool is_read;
	u32 sysreg;
 
-	esr = kvm_vcpu_get_esr(vcpu);
+	esr = kvm_hyp_state_get_esr(vcpu_hyps);
	if (ctxt_mode_is_32bit(vcpu_ctxt)) {
		if (!kvm_condition_valid(vcpu)) {
			__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
@@ -1195,8 +1222,8 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
	}
 
	vmcr = __vgic_v3_read_vmcr();
-	rt = kvm_vcpu_sys_get_rt(vcpu);
-	fn(vcpu, vmcr, rt);
+	rt = kvm_hyp_state_sys_get_rt(vcpu_hyps);
+	fn(cpu_if, vcpu_ctxt, vcpu_hyps, vmcr, rt);
 
	__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
 

From patchwork Fri Sep 24 12:53:45 2021
text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 12515313 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_ADSP_CUSTOM_MED,DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CBAC7C433EF for ; Fri, 24 Sep 2021 13:05:43 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 99EB061019 for ; Fri, 24 Sep 2021 13:05:43 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 99EB061019 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=WcDY/iZk7Y8216V8jrV03uEuxNCI+vE7bht6gdmkxcc=; b=fehPW14etd7j4JCpeHGqCcelUQ sq15vyJV8XDjQkwD0QRYVd67ROBDTa4NKOx1VBa/fKQwgIXixqLL6RXjVGY/1OHYj85cJ/ia9Jn6C eOEg+hJuKALxI7VUyyiUvfoX8EU93eFBt1hWFKDZyhScJyx+E+/+DaKiOXMnNliq5MqBzOyK6yRCz 
klQp1dPTp54FVti+qsHglkOsIYepO3Y6krhijJaBg328pnLgy5KhJC1eiMY5qO1NEcO0T+Jxg1r+h G69cSpsO0FGBuBBO7cF4CKmhNHK1EFY1oEHuH2ZFEM0fp9o8j4kjp0rmRALeCEWVtIEUYEpDC+/6o zzHBMC2Q==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1mTkra-00EQQC-WC; Fri, 24 Sep 2021 13:03:40 +0000 Received: from mail-qk1-x749.google.com ([2607:f8b0:4864:20::749]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1mTkit-00EMBX-1Q for linux-arm-kernel@lists.infradead.org; Fri, 24 Sep 2021 12:54:40 +0000 Received: by mail-qk1-x749.google.com with SMTP id d7-20020a05620a240700b0045da3bf3509so19812413qkn.19 for ; Fri, 24 Sep 2021 05:54:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=56Ft6X8W77hY1O7BY33JIx3hfVz1zTOJLBEzRbdZLVw=; b=bNf2MAkSxeUvnC8APurNJOKp8j5WNRVTRWXn3WErUfYnVGPg5v/3n5gcIVXKw4t9W+ ++aRVwRIoyywjT6TVBxQg1NUQyGIG0VzdvUVrPmsmOd0j6qsR/jOMsEc8AodGDClGEYE ilc5+eCrH2jUWhkTldhPyKFHm6/CQrcZGo/No50sHneGSa0fwYmSP74YwjfYULkYD+hn c5Y2MjS7Qyto3oZ343G5CHx9di3IAseWbWfsc1zBK/cjTfGcfNSlwFNI2q+xAnHGH574 EU/1NvgGryTNgZSf9rJArmTNgJZZQh+vL1Qo+AIO/e4ZwxL72jO5CZVB43cxyj7O4E83 ruhg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=56Ft6X8W77hY1O7BY33JIx3hfVz1zTOJLBEzRbdZLVw=; b=0WdZWecwZDD59PAQ1AOoI7jLjSyhPr9c5bkEjTqWNPhfq2ppf11e1rJaqNECkpXLgv aCsdCZ8BbDna0JApskBpWh58qz9m8vPRsym2fhOOjl/boga4sAFuobL3R1RxFUemRNS3 D78EvcIG72MLdGdS0R4CFSy2pY/rGq1uF0GBOkSy5x5ZP4ncyrsiXZPdpHZcLbce1CTV FiLNzq+ltq1jdx3rWMSchRSCBngGAAL931IL3Hajirp3pT7eanaLLDnZQfEH+IIFFcNn Gv/seb9aD7lSd1vWzf3g+hySwyeYS2I7cH3XlAUIXPRg6lLViFYO/zA0+UMtKZhjA6sX 9Zow== X-Gm-Message-State: AOAM533acvVpHNLiYdSTKyOTrxbY+hImzXJ0KISMjkEeaaxHf3ajw2m9 piEQoNlcULL71Ly9em4WGFhbqYdwZw== 
Date: Fri, 24 Sep 2021 13:53:45 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-17-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 16/30] KVM: arm64: reduce scope of vgic_v3 access parameters
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

Now that __vgic_v3_perform_cpuif_access() needs only the vgic_v3_cpu_if, the kvm_cpu_context, and the vcpu_hyp_state, pass these to it rather than the whole vcpu.
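The shape of this refactor can be sketched in plain C: a function that used to take the whole vcpu now takes only the pieces of state it actually touches, so its dependencies become explicit in its signature. All type and field names below are simplified stand-ins for illustration, not the kernel's real definitions.

```c
#include <assert.h>

/* Simplified stand-ins for the kernel structures. */
struct vgic_v3_cpu_if { unsigned int vmcr; };
struct kvm_cpu_context { unsigned long regs[31]; };
struct vcpu_hyp_state { unsigned int esr; };

struct kvm_vcpu {
	struct kvm_cpu_context ctxt;
	struct vcpu_hyp_state hyp_state;
	struct vgic_v3_cpu_if vgic_v3;
};

/*
 * After the refactor: struct kvm_vcpu is not visible here at all, so this
 * function can only touch the three pieces of state it is handed.
 */
static int perform_cpuif_access(struct vgic_v3_cpu_if *cpu_if,
				struct kvm_cpu_context *ctxt,
				struct vcpu_hyp_state *hyps)
{
	/* Placeholder body: combine the narrowed state. */
	ctxt->regs[0] = cpu_if->vmcr + hyps->esr;
	return 1;
}

/* The call site is the only place that still knows about the full vcpu. */
static int call_site(struct kvm_vcpu *vcpu)
{
	return perform_cpuif_access(&vcpu->vgic_v3, &vcpu->ctxt,
				    &vcpu->hyp_state);
}
```

The payoff is that the narrowed function can later be compiled or audited in a context (such as a protected hypervisor) where the full vcpu layout is deliberately not exposed.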
Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_hyp.h        | 4 +++-
 arch/arm64/kvm/hyp/include/hyp/switch.h | 2 +-
 arch/arm64/kvm/hyp/vgic-v3-sr.c         | 9 ++++-----
 3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index d9a8872a7efb..b379c2b96f33 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -63,7 +63,9 @@ void __vgic_v3_activate_traps(struct vgic_v3_cpu_if *cpu_if);
 void __vgic_v3_deactivate_traps(struct vgic_v3_cpu_if *cpu_if);
 void __vgic_v3_save_aprs(struct vgic_v3_cpu_if *cpu_if);
 void __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if);
-int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu);
+int __vgic_v3_perform_cpuif_access(struct vgic_v3_cpu_if *cpu_if,
+				   struct kvm_cpu_context *vcpu_ctxt,
+				   struct vcpu_hyp_state *vcpu_hyps);

 #ifdef __KVM_NVHE_HYPERVISOR__
 void __timer_enable_traps(void);
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 30fcfe84f609..44e76993a9b4 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -502,7 +502,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgi
 	if (static_branch_unlikely(&vgic_v3_cpuif_trap) &&
 	    (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 ||
 	     kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_CP15_32)) {
-		int ret = __vgic_v3_perform_cpuif_access(vcpu);
+		int ret = __vgic_v3_perform_cpuif_access(&vcpu->arch.vgic_cpu.vgic_v3, vcpu_ctxt, vcpu_hyps);

 		if (ret == 1)
 			goto guest;
diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
index 3e1951b04fce..2c16e0cd45f0 100644
--- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
+++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
@@ -1097,11 +1097,10 @@ static void __vgic_v3_write_ctlr(struct vgic_v3_cpu_if *cpu_if,
 	write_gicreg(vmcr, ICH_VMCR_EL2);
 }

-int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
+int __vgic_v3_perform_cpuif_access(struct vgic_v3_cpu_if *cpu_if,
+				   struct kvm_cpu_context *vcpu_ctxt,
+				   struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	int rt;
 	u32 esr;
 	u32 vmcr;
@@ -1112,7 +1111,7 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
 	esr = kvm_hyp_state_get_esr(vcpu_hyps);

 	if (ctxt_mode_is_32bit(vcpu_ctxt)) {
-		if (!kvm_condition_valid(vcpu)) {
+		if (!__kvm_condition_valid(vcpu_ctxt, vcpu_hyps)) {
 			__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
 			return 1;
 		}

From patchwork Fri Sep 24 12:53:46 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515315
Date: Fri, 24 Sep 2021 13:53:46 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-18-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 17/30] KVM: arm64: access __hyp_running_vcpu via accessors only
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com
__hyp_running_vcpu exposes struct kvm_vcpu, but everything that accesses it needs only the cpu context and the hyp state. Start this refactoring by first ensuring that all accesses to __hyp_running_vcpu go through accessors rather than directly.

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_asm.h           | 24 ++++++++++++++++++++++
 arch/arm64/include/asm/kvm_host.h          |  7 +++++++
 arch/arm64/kernel/asm-offsets.c            |  1 +
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |  4 ++--
 arch/arm64/kvm/hyp/nvhe/switch.c           | 10 ++++-----
 arch/arm64/kvm/hyp/vhe/switch.c            |  8 +++-----
 6 files changed, 41 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 5e9b33cbac51..766b6a852407 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -251,6 +251,18 @@ extern u32 __kvm_get_mdcr_el2(void);
 	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
 .endm

+.macro get_vcpu_ctxt_ptr vcpu, ctxt
+	get_host_ctxt \ctxt, \vcpu
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+	add	\vcpu, \vcpu, #VCPU_CONTEXT
+.endm
+
+.macro get_vcpu_hyps_ptr vcpu, ctxt
+	get_host_ctxt \ctxt, \vcpu
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+	add	\vcpu, \vcpu, #VCPU_HYPS
+.endm
+
 .macro get_loaded_vcpu vcpu, ctxt
 	adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu
 	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
@@ -261,6 +273,18 @@ extern u32 __kvm_get_mdcr_el2(void);
 	str	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
 .endm

+.macro get_loaded_vcpu_ctxt vcpu, ctxt
+	adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+	add	\vcpu, \vcpu, #VCPU_CONTEXT
+.endm
+
+.macro get_loaded_vcpu_hyps vcpu, ctxt
+	adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+	add	\vcpu, \vcpu, #VCPU_HYPS
+.endm
+
 /*
  * KVM extable for unexpected exceptions.
  * In the same format _asm_extable, but output to a different section so that
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index dc4b5e133d86..4b01c74705ad 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -230,6 +230,13 @@ struct kvm_cpu_context {
 	struct kvm_vcpu *__hyp_running_vcpu;
 };

+#define get_hyp_running_vcpu(ctxt)	(ctxt)->__hyp_running_vcpu
+#define set_hyp_running_vcpu(ctxt, vcpu) (ctxt)->__hyp_running_vcpu = (vcpu)
+#define is_hyp_running_vcpu(ctxt)	(ctxt)->__hyp_running_vcpu
+
+#define get_hyp_running_ctxt(host_ctxt)	(host_ctxt)->__hyp_running_vcpu ? &(host_ctxt)->__hyp_running_vcpu->arch.ctxt : NULL
+#define get_hyp_running_hyps(host_ctxt)	(host_ctxt)->__hyp_running_vcpu ? &(host_ctxt)->__hyp_running_vcpu->arch.hyp_state : NULL
+
 struct kvm_pmu_events {
 	u32 events_host;
 	u32 events_guest;
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 1776efc3cc9d..1ecc55570acc 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -107,6 +107,7 @@ int main(void)
   BLANK();
 #ifdef CONFIG_KVM
   DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
+  DEFINE(VCPU_HYPS,		offsetof(struct kvm_vcpu, arch.hyp_state));
   DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.hyp_state.fault.disr_el1));
   DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
   DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_cpu_context, regs));
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index 7bc8b34b65b2..df9cd2177e71 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -80,7 +80,7 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
 	    !cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL1),	SYS_SCTLR);
 		write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL1),	SYS_TCR);
-	} else	if (!ctxt->__hyp_running_vcpu) {
+	} else	if (!is_hyp_running_vcpu(ctxt)) {
 		/*
 		 * Must only be done for guest registers, hence the context
 		 * test. We're coming from the host, so SCTLR.M is already
@@ -109,7 +109,7 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)

 	if (!has_vhe() &&
 	    cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT) &&
-	    ctxt->__hyp_running_vcpu) {
+	    is_hyp_running_vcpu(ctxt)) {
 		/*
 		 * Must only be done for host registers, hence the context
 		 * test. Pairs with nVHE's __deactivate_traps().
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 164b0f899f7b..12c673301210 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -191,7 +191,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	}

 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	host_ctxt->__hyp_running_vcpu = vcpu;
+	set_hyp_running_vcpu(host_ctxt, vcpu);
 	guest_ctxt = &vcpu->arch.ctxt;

 	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
@@ -261,7 +261,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	if (system_uses_irq_prio_masking())
 		gic_write_pmr(GIC_PRIO_IRQOFF);

-	host_ctxt->__hyp_running_vcpu = NULL;
+	set_hyp_running_vcpu(host_ctxt, NULL);

 	return exit_code;
 }
@@ -274,12 +274,10 @@ void __noreturn hyp_panic(void)
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_vcpu *vcpu;
 	struct vcpu_hyp_state *vcpu_hyps;
-	struct kvm_cpu_context *vcpu_ctxt;

 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	vcpu = host_ctxt->__hyp_running_vcpu;
-	vcpu_hyps = &hyp_state(vcpu);
-	vcpu_ctxt = &vcpu_ctxt(vcpu);
+	vcpu = get_hyp_running_vcpu(host_ctxt);
+	vcpu_hyps = get_hyp_running_hyps(host_ctxt);

 	if (vcpu) {
 		__timer_disable_traps();
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index f315058a50ca..14c434e00914 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -117,7 +117,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	u64 exit_code;

 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	host_ctxt->__hyp_running_vcpu = vcpu;
+	set_hyp_running_vcpu(host_ctxt, vcpu);
 	guest_ctxt = &vcpu->arch.ctxt;

 	sysreg_save_host_state_vhe(host_ctxt);
@@ -205,12 +205,10 @@ static void __hyp_call_panic(u64 spsr, u64 elr, u64 par)
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_vcpu *vcpu;
 	struct vcpu_hyp_state *vcpu_hyps;
-	struct kvm_cpu_context *vcpu_ctxt;

 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	vcpu = host_ctxt->__hyp_running_vcpu;
-	vcpu_hyps = &hyp_state(vcpu);
-	vcpu_ctxt = &vcpu_ctxt(vcpu);
+	vcpu = get_hyp_running_vcpu(host_ctxt);
+	vcpu_hyps = get_hyp_running_hyps(host_ctxt);

 	__deactivate_traps(vcpu_hyps);
 	sysreg_restore_host_state_vhe(host_ctxt);

From patchwork Fri Sep 24 12:53:47 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515343
Date: Fri, 24 Sep 2021 13:53:47 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-19-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 18/30] KVM: arm64: reduce scope of __guest_exit to only depend on kvm_cpu_context
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com
__guest_exit needs only the kvm_cpu_context (via the offset VCPU_CONTEXT). Pass only that to it, and adjust it so that it refers to the kvm_cpu_context rather than the vcpu.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/entry.S     | 7 ++-----
 arch/arm64/kvm/hyp/hyp-entry.S | 8 ++++----
 2 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index e831d3dfd50d..996bdc9555da 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -99,15 +99,12 @@ SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
 	adr_l	x1, hyp_panic
 	str	x1, [x0, #CPU_XREG_OFFSET(30)]

-	get_vcpu_ptr	x1, x0
+	get_vcpu_ctxt_ptr	x1, x0

 SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
 	// x0: return code
-	// x1: vcpu
+	// x1: ctxt
 	// x2-x29,lr: vcpu regs
-	// vcpu x0-x1 on the stack
-
-	add	x1, x1, #VCPU_CONTEXT

 	ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN)

diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 5f49df4ffdd8..704b3388c86a 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -71,17 +71,17 @@ wa_epilogue:
 	sb

 el1_trap:
-	get_vcpu_ptr	x1, x0
+	get_vcpu_ctxt_ptr	x1, x0
 	mov	x0, #ARM_EXCEPTION_TRAP
 	b	__guest_exit

 el1_irq:
-	get_vcpu_ptr	x1, x0
+	get_vcpu_ctxt_ptr	x1, x0
 	mov	x0, #ARM_EXCEPTION_IRQ
 	b	__guest_exit

 el1_error:
-	get_vcpu_ptr	x1, x0
+	get_vcpu_ctxt_ptr	x1, x0
 	mov	x0, #ARM_EXCEPTION_EL1_SERROR
 	b	__guest_exit

@@ -100,7 +100,7 @@ el2_sync:

1:
 	/* Let's attempt a recovery from the illegal exception return */
-	get_vcpu_ptr	x1, x0
+	get_vcpu_ctxt_ptr	x1, x0
 	mov	x0, #ARM_EXCEPTION_IL
 	b	__guest_exit

From patchwork Fri Sep 24 12:53:48 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515317
Date: Fri, 24 Sep 2021 13:53:48 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-20-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 19/30] KVM: arm64: change calls of get_loaded_vcpu to get_loaded_vcpu_ctxt
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

get_loaded_vcpu is used only for a NULL check. get_loaded_vcpu_ctxt fills the same role while reducing the scope of what is exposed.
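The idea behind this substitution can be illustrated in C: when a pointer is fetched only to compare it against NULL, a pointer derived from it (here, the context embedded at a fixed offset inside the vcpu) answers the same question while exposing a narrower type. The names below are simplified stand-ins for the assembly macros, not the kernel's actual C API.

```c
#include <assert.h>
#include <stddef.h>

struct kvm_cpu_context { unsigned long regs[31]; };
struct kvm_vcpu { struct kvm_cpu_context ctxt; };

/* Stand-in for the per-CPU "loaded vcpu" slot. */
static struct kvm_vcpu *loaded_vcpu;

/* Old accessor: hands back the whole vcpu. */
static struct kvm_vcpu *get_loaded_vcpu(void)
{
	return loaded_vcpu;
}

/*
 * Narrower accessor: NULL exactly when no vcpu is loaded, because the
 * context lives at a fixed offset inside the vcpu. Callers that only
 * want to know "is a guest loaded?" never see struct kvm_vcpu at all.
 */
static struct kvm_cpu_context *get_loaded_vcpu_ctxt(void)
{
	return loaded_vcpu ? &loaded_vcpu->ctxt : NULL;
}
```

In the assembly version the derived pointer is simply `vcpu + #VCPU_CONTEXT`, so the NULL check (`cbnz`) behaves identically as long as the offset computation is skipped when the base is NULL, which the patched macros rely on by testing the result directly.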
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/entry.S     | 4 ++--
 arch/arm64/kvm/hyp/nvhe/host.S | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 996bdc9555da..1804be5b7ead 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -81,10 +81,10 @@ alternative_else_nop_endif

 SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
 	// x2-x29,lr: vcpu regs
-	// vcpu x0-x1 on the stack
+	// vcpu ctxt x0-x1 on the stack

 	// If the hyp context is loaded, go straight to hyp_panic
-	get_loaded_vcpu x0, x1
+	get_loaded_vcpu_ctxt x0, x1
 	cbnz	x0, 1f
 	b	hyp_panic

diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 2b23400e0fb3..7de2e8716f69 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -134,7 +134,7 @@ SYM_FUNC_END(__hyp_do_panic)
 .align 7
 	/* If a guest is loaded, panic out of it. */
 	stp	x0, x1, [sp, #-16]!
-	get_loaded_vcpu x0, x1
+	get_loaded_vcpu_ctxt x0, x1
 	cbnz	x0, __guest_exit_panic

 	add	sp, sp, #16

From patchwork Fri Sep 24 12:53:49 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515347
b=J/C7Ta1yUem3gYRbil/zKRF0IxStHUFvTVNEvuRRLK6gctPS7/SC8wGX+7hdV8c7WF sjSMDuG0NjrV+/3ZNT0KciOwQfFrUiKwdKHNGGAYU85PPOeEbgyIDakYmiziwKsXibQN ZPrs0iDshzZjAJZ1kGyfQ1P565o2m8tAaHk2VwfnaXwwQCX46G347RPHq4hD4gmQ2LzT SLor++gvWIZW6zyDo3nnux9Tf+Km2hcZI0xela+/ti30GaLVylKlApErL4/QNfcgmrZL +j6MVi7XB5CJE/FaDfjRSKxRzHxJNULy6UVDbKmpOpKkV6+Jjj8ul6m6gZRAuZ+d1A50 4JLg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=xhHjvocneo3OmYPl2aUhxqJ3BMqVUjdViv8A/KXRtOc=; b=PBZgfKe1d7d6Y7DTZk44bRchm0LHlPz8lz8bh1HDD4MWModFKf6/wf0zp8pK6RvNw1 2JwWZBTr/1/GUfqDKqoN7brHEfvvFFVXT0gnqK7pYVXKz5ASWtnw8OSVO6UuEHay+Yv8 HzrgWpeJ1Kv0G/EoOcCY0ngh3YKJQi1SNdm5Wbi8tyuWlCio67kDnE58eCV0Up3svBT4 mgVSqm/t/zbKMWwYFastQWEAp2EKGDmSnUgIw44b+QuICmtyw50HUsYEbUXCidCvaUzd RQtVeqQBfKrb88thtTFk708VyJ200/1C+3YhRXGfEz37G0kwRSnKmBN2W23VdT84PZEo SbxQ== X-Gm-Message-State: AOAM530rmP5j3kpVkh2FsNGP2t4p+WEsz3B2a2J859wMA0qUuXfZccVj 7eOwyGCsJwlYNS8DG+DBKy0AKSelUA== X-Google-Smtp-Source: ABdhPJxU6dLnsCPB/2TUUhu7b+8Tafoe11ACEILnegBEKHhcSyod6McampCuDXPyU43B2yIY76HX2EhGMA== X-Received: from tabba.c.googlers.com ([fda3:e722:ac3:cc00:28:9cb1:c0a8:482]) (user=tabba job=sendgmr) by 2002:a05:6000:1a89:: with SMTP id f9mr11219965wry.19.1632488083973; Fri, 24 Sep 2021 05:54:43 -0700 (PDT) Date: Fri, 24 Sep 2021 13:53:49 +0100 In-Reply-To: <20210924125359.2587041-1-tabba@google.com> Message-Id: <20210924125359.2587041-21-tabba@google.com> Mime-Version: 1.0 References: <20210924125359.2587041-1-tabba@google.com> X-Mailer: git-send-email 2.33.0.685.g46640cef36-goog Subject: [RFC PATCH v1 20/30] KVM: arm64: add __hyp_running_ctxt and __hyp_running_hyps From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, 
 linux-arm-kernel@lists.infradead.org, kernel-team@android.com,
 tabba@google.com

In order to prepare to remove __hyp_running_vcpu, add __hyp_running_ctxt
and __hyp_running_hyps to access the running kvm_cpu_ctxt and the
hyp_state, as well as their associated assembly offsets. These new fields
are updated but not accessed yet. Their state is consistent with
__hyp_running_vcpu.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_asm.h  | 13 +++++++++++++
 arch/arm64/include/asm/kvm_host.h | 19 ++++++++++++++++---
 arch/arm64/kernel/asm-offsets.c   |  2 ++
 arch/arm64/kvm/hyp/entry.S        |  2 +-
 4 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 766b6a852407..52079e937fcd 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -271,6 +271,19 @@ extern u32 __kvm_get_mdcr_el2(void);
 .macro set_loaded_vcpu vcpu, ctxt, tmp
 	adr_this_cpu \ctxt, kvm_hyp_ctxt, \tmp
 	str \vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+
+	add \tmp, \vcpu, #VCPU_CONTEXT
+	str \tmp, [\ctxt, #HOST_CONTEXT_CTXT]
+
+	add \tmp, \vcpu, #VCPU_HYPS
+	str \tmp, [\ctxt, #HOST_CONTEXT_HYPS]
+.endm
+
+.macro clear_loaded_vcpu ctxt, tmp
+	adr_this_cpu \ctxt, kvm_hyp_ctxt, \tmp
+	str xzr, [\ctxt, #HOST_CONTEXT_VCPU]
+	str xzr, [\ctxt, #HOST_CONTEXT_CTXT]
+	str xzr, [\ctxt, #HOST_CONTEXT_HYPS]
 .endm

 .macro get_loaded_vcpu_ctxt vcpu, ctxt

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4b01c74705ad..b42d0c6c8004 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -228,14 +228,27 @@ struct kvm_cpu_context {
 	u64 sys_regs[NR_SYS_REGS];

 	struct kvm_vcpu *__hyp_running_vcpu;
+	struct kvm_cpu_context *__hyp_running_ctxt;
+	struct vcpu_hyp_state *__hyp_running_hyps;
 };

 #define get_hyp_running_vcpu(ctxt) (ctxt)->__hyp_running_vcpu
-#define set_hyp_running_vcpu(ctxt, vcpu) (ctxt)->__hyp_running_vcpu = (vcpu)
+#define set_hyp_running_vcpu(host_ctxt, vcpu) do { \
+	struct kvm_vcpu *v = (vcpu); \
+	(host_ctxt)->__hyp_running_vcpu = v; \
+	if (vcpu) { \
+		(host_ctxt)->__hyp_running_ctxt = &v->arch.ctxt; \
+		(host_ctxt)->__hyp_running_hyps = &v->arch.hyp_state; \
+	} else { \
+		(host_ctxt)->__hyp_running_ctxt = NULL; \
+		(host_ctxt)->__hyp_running_hyps = NULL; \
+	}\
+} while(0)
+
 #define is_hyp_running_vcpu(ctxt) (ctxt)->__hyp_running_vcpu

-#define get_hyp_running_ctxt(host_ctxt) (host_ctxt)->__hyp_running_vcpu ? &(host_ctxt)->__hyp_running_vcpu->arch.ctxt : NULL
-#define get_hyp_running_hyps(host_ctxt) (host_ctxt)->__hyp_running_vcpu ? &(host_ctxt)->__hyp_running_vcpu->arch.hyp_state : NULL
+#define get_hyp_running_ctxt(host_ctxt) (host_ctxt)->__hyp_running_ctxt
+#define get_hyp_running_hyps(host_ctxt) (host_ctxt)->__hyp_running_hyps

 struct kvm_pmu_events {
 	u32 events_host;

diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 1ecc55570acc..9c25078da294 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -117,6 +117,8 @@ int main(void)
   DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
   DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
   DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
+  DEFINE(HOST_CONTEXT_CTXT,	offsetof(struct kvm_cpu_context, __hyp_running_ctxt));
+  DEFINE(HOST_CONTEXT_HYPS,	offsetof(struct kvm_cpu_context, __hyp_running_hyps));
   DEFINE(HOST_DATA_CONTEXT,	offsetof(struct kvm_host_data, host_ctxt));
   DEFINE(NVHE_INIT_MAIR_EL2,	offsetof(struct kvm_nvhe_init_params, mair_el2));
   DEFINE(NVHE_INIT_TCR_EL2,	offsetof(struct kvm_nvhe_init_params, tcr_el2));

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 1804be5b7ead..8e7033aa5770 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -145,7 +145,7 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
 	// Now restore the hyp regs
 	restore_callee_saved_regs x2

-	set_loaded_vcpu xzr, x2, x3
+	clear_loaded_vcpu x2, x3

 alternative_if ARM64_HAS_RAS_EXTN
 	// If we have the RAS extensions we can consume a pending error

From patchwork Fri Sep 24 12:53:50 2021
Date: Fri, 24 Sep 2021 13:53:50 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-22-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 21/30] KVM: arm64: transition code to
 __hyp_running_ctxt and __hyp_running_hyps
From: Fuad Tabba <tabba@google.com>
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
 alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
 christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com,
 kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kernel-team@android.com, tabba@google.com

Transition the code to use the new hyp_running pointers. This is safe
because all of the fields are kept in sync. Remove __hyp_running_vcpu
now that nothing uses it.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_asm.h  | 24 ++++--------------------
 arch/arm64/include/asm/kvm_host.h |  5 +----
 arch/arm64/kernel/asm-offsets.c   |  1 -
 arch/arm64/kvm/handle_exit.c      |  6 +++---
 arch/arm64/kvm/hyp/nvhe/host.S    |  2 +-
 arch/arm64/kvm/hyp/nvhe/switch.c  |  4 +---
 arch/arm64/kvm/hyp/vhe/switch.c   |  8 ++++----
 7 files changed, 14 insertions(+), 36 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 52079e937fcd..e24ebcf9e0d3 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -246,31 +246,18 @@ extern u32 __kvm_get_mdcr_el2(void);
 	add	\reg, \reg, #HOST_DATA_CONTEXT
 .endm

-.macro get_vcpu_ptr vcpu, ctxt
-	get_host_ctxt \ctxt, \vcpu
-	ldr \vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
-.endm
-
 .macro get_vcpu_ctxt_ptr vcpu, ctxt
 	get_host_ctxt \ctxt, \vcpu
-	ldr \vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
-	add \vcpu, \vcpu, #VCPU_CONTEXT
+	ldr \vcpu, [\ctxt, #HOST_CONTEXT_CTXT]
 .endm

 .macro get_vcpu_hyps_ptr vcpu, ctxt
 	get_host_ctxt \ctxt, \vcpu
-	ldr \vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
-	add \vcpu, \vcpu, #VCPU_HYPS
-.endm
-
-.macro get_loaded_vcpu vcpu, ctxt
-	adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu
-	ldr \vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+	ldr \vcpu, [\ctxt, #HOST_CONTEXT_HYPS]
 .endm

 .macro set_loaded_vcpu vcpu, ctxt, tmp
 	adr_this_cpu \ctxt, kvm_hyp_ctxt, \tmp
-	str \vcpu, [\ctxt, #HOST_CONTEXT_VCPU]

 	add \tmp, \vcpu, #VCPU_CONTEXT
 	str \tmp, [\ctxt, #HOST_CONTEXT_CTXT]
@@ -281,21 +268,18 @@ extern u32 __kvm_get_mdcr_el2(void);

 .macro clear_loaded_vcpu ctxt, tmp
 	adr_this_cpu \ctxt, kvm_hyp_ctxt, \tmp
-	str xzr, [\ctxt, #HOST_CONTEXT_VCPU]
 	str xzr, [\ctxt, #HOST_CONTEXT_CTXT]
 	str xzr, [\ctxt, #HOST_CONTEXT_HYPS]
 .endm

 .macro get_loaded_vcpu_ctxt vcpu, ctxt
 	adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu
-	ldr \vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
-	add \vcpu, \vcpu, #VCPU_CONTEXT
+	ldr \vcpu, [\ctxt, #HOST_CONTEXT_CTXT]
 .endm

 .macro get_loaded_vcpu_hyps vcpu, ctxt
 	adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu
-	ldr \vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
-	add \vcpu, \vcpu, #VCPU_HYPS
+	ldr \vcpu, [\ctxt, #HOST_CONTEXT_HYPS]
 .endm

 /*

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index b42d0c6c8004..035ca5a49166 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -227,15 +227,12 @@ struct kvm_cpu_context {
 	u64 sys_regs[NR_SYS_REGS];

-	struct kvm_vcpu *__hyp_running_vcpu;
 	struct kvm_cpu_context *__hyp_running_ctxt;
 	struct vcpu_hyp_state *__hyp_running_hyps;
 };

-#define get_hyp_running_vcpu(ctxt) (ctxt)->__hyp_running_vcpu
 #define set_hyp_running_vcpu(host_ctxt, vcpu) do { \
 	struct kvm_vcpu *v = (vcpu); \
-	(host_ctxt)->__hyp_running_vcpu = v; \
 	if (vcpu) { \
 		(host_ctxt)->__hyp_running_ctxt = &v->arch.ctxt; \
 		(host_ctxt)->__hyp_running_hyps = &v->arch.hyp_state; \
@@ -245,7 +242,7 @@ struct kvm_cpu_context {
 	}\
 } while(0)

-#define is_hyp_running_vcpu(ctxt) (ctxt)->__hyp_running_vcpu
+#define is_hyp_running_vcpu(ctxt) (ctxt)->__hyp_running_ctxt

 #define get_hyp_running_ctxt(host_ctxt) (host_ctxt)->__hyp_running_ctxt
 #define get_hyp_running_hyps(host_ctxt) (host_ctxt)->__hyp_running_hyps

diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 9c25078da294..f42aea730cf4 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -116,7 +116,6 @@ int main(void)
   DEFINE(CPU_APDAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
   DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
   DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
-  DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
   DEFINE(HOST_CONTEXT_CTXT,	offsetof(struct kvm_cpu_context, __hyp_running_ctxt));
   DEFINE(HOST_CONTEXT_HYPS,	offsetof(struct kvm_cpu_context, __hyp_running_hyps));
   DEFINE(HOST_DATA_CONTEXT,	offsetof(struct kvm_host_data, host_ctxt));

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 22e9f03fe901..cb6a25b79e38 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -293,7 +293,7 @@ void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index)
 }

 void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr,
-					      u64 par, uintptr_t vcpu,
+					      u64 par, uintptr_t vcpu_ctxt,
 					      u64 far, u64 hpfar)
 {
 	u64 elr_in_kimg = __phys_to_kimg(__hyp_pa(elr));
 	u64 hyp_offset = elr_in_kimg - kaslr_offset() - elr;
@@ -333,6 +333,6 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr,
 	 */
 	kvm_err("Hyp Offset: 0x%llx\n", hyp_offset);

-	panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%016lx\n",
-	      spsr, elr, esr, far, hpfar, par, vcpu);
+	panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU_CTXT:%016lx\n",
+	      spsr, elr, esr, far, hpfar, par, vcpu_ctxt);
 }

diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 7de2e8716f69..975cf125d54c 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -87,7 +87,7 @@ SYM_FUNC_START(__hyp_do_panic)
 	/* Load the panic arguments into x0-7 */
 	mrs	x0, esr_el2
-	get_vcpu_ptr x4, x5
+	get_vcpu_ctxt_ptr x4, x5
 	mrs	x5, far_el2
 	mrs	x6, hpfar_el2
 	mov	x7, xzr		// Unused argument

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 12c673301210..483df8fe052e 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -272,14 +272,12 @@ void __noreturn hyp_panic(void)
 	u64 elr = read_sysreg_el2(SYS_ELR);
 	u64 par = read_sysreg_par();
 	struct kvm_cpu_context *host_ctxt;
-	struct kvm_vcpu *vcpu;
 	struct vcpu_hyp_state *vcpu_hyps;

 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	vcpu = get_hyp_running_vcpu(host_ctxt);
 	vcpu_hyps = get_hyp_running_hyps(host_ctxt);

-	if (vcpu) {
+	if (vcpu_hyps) {
 		__timer_disable_traps();
 		__deactivate_traps(vcpu_hyps);
 		__load_host_stage2();

diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 14c434e00914..64de9f0d7636 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -203,20 +203,20 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 static void __hyp_call_panic(u64 spsr, u64 elr, u64 par)
 {
 	struct kvm_cpu_context *host_ctxt;
-	struct kvm_vcpu *vcpu;
+	struct kvm_cpu_context *vcpu_ctxt;
 	struct vcpu_hyp_state *vcpu_hyps;

 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	vcpu = get_hyp_running_vcpu(host_ctxt);
+	vcpu_ctxt = get_hyp_running_ctxt(host_ctxt);
 	vcpu_hyps = get_hyp_running_hyps(host_ctxt);

 	__deactivate_traps(vcpu_hyps);
 	sysreg_restore_host_state_vhe(host_ctxt);

-	panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n",
+	panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU_CTXT:%p\n",
 	      spsr, elr, read_sysreg_el2(SYS_ESR), read_sysreg_el2(SYS_FAR),
-	      read_sysreg(hpfar_el2), par, vcpu);
+	      read_sysreg(hpfar_el2), par, vcpu_ctxt);
 }

 NOKPROBE_SYMBOL(__hyp_call_panic);

From patchwork Fri Sep 24 12:53:51 2021
From: Fuad Tabba <tabba@google.com>
Date: Fri, 24 Sep 2021 13:53:51 +0100
Subject: [RFC PATCH v1 22/30] KVM: arm64: reduce scope of __guest_enter to
 depend only on kvm_cpu_ctxt
Message-Id: <20210924125359.2587041-23-tabba@google.com>
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
 alexandru.elisei@arm.com,
 suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com,
 drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kernel-team@android.com,
 tabba@google.com

__guest_enter doesn't need the vcpu, only the guest's kvm_cpu_ctxt.
Reduce its scope to that. With this commit, the only state in struct
kvm_vcpu that the hypervisor will need to save locally in future patches
is the guest context (kvm_cpu_context) and the hypervisor state
(vcpu_hyp_state).

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_hyp.h |  2 +-
 arch/arm64/kvm/hyp/entry.S       | 10 ++++------
 arch/arm64/kvm/hyp/nvhe/switch.c |  5 ++++-
 arch/arm64/kvm/hyp/vhe/switch.c  |  5 ++++-
 4 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index b379c2b96f33..c5206e958136 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -100,7 +100,7 @@ void activate_traps_vhe_load(struct vcpu_hyp_state *vcpu_hyps);
 void deactivate_traps_vhe_put(void);
 #endif

-u64 __guest_enter(struct kvm_vcpu *vcpu);
+u64 __guest_enter(struct kvm_cpu_context *guest_ctxt);

 bool kvm_host_psci_handler(struct kvm_cpu_context *host_ctxt);

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 8e7033aa5770..f553f184e402 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -18,12 +18,12 @@
 	.text

 /*
- * u64 __guest_enter(struct kvm_vcpu *vcpu);
+ * u64 __guest_enter(struct kvm_cpu_context *guest_ctxt);
  */
 SYM_FUNC_START(__guest_enter)
-	// x0: vcpu
+	// x0: guest context (input parameter)
 	// x1-x17: clobbered by macros
-	// x29: guest context
+	// x29: guest context (maintained for call duration)

 	adr_this_cpu x1, kvm_hyp_ctxt, x2

@@ -47,9 +47,7 @@ alternative_else_nop_endif
 	ret

 1:
-	set_loaded_vcpu x0, x1, x2
-
-	add	x29, x0, #VCPU_CONTEXT
+	mov	x29, x0

 	// Macro ptrauth_switch_to_guest format:
 	// 	ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 483df8fe052e..d9a69e66158c 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -228,8 +228,11 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__debug_switch_to_guest(vcpu);

 	do {
+		struct kvm_cpu_context *hyp_ctxt = this_cpu_ptr(&kvm_hyp_ctxt);
+		set_hyp_running_vcpu(hyp_ctxt, vcpu);
+
 		/* Jump in the fire! */
-		exit_code = __guest_enter(vcpu);
+		exit_code = __guest_enter(guest_ctxt);

 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, vgic, &exit_code));

diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 64de9f0d7636..5039910a7c80 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -142,8 +142,11 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	__debug_switch_to_guest(vcpu);

 	do {
+		struct kvm_cpu_context *hyp_ctxt = this_cpu_ptr(&kvm_hyp_ctxt);
+		set_hyp_running_vcpu(hyp_ctxt, vcpu);
+
 		/* Jump in the fire! */
-		exit_code = __guest_enter(vcpu);
+		exit_code = __guest_enter(guest_ctxt);

 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, vgic, &exit_code));

From patchwork Fri Sep 24 12:53:52 2021
From: Fuad Tabba <tabba@google.com>
Date: Fri, 24 Sep 2021 13:53:52 +0100
Subject: [RFC PATCH v1 23/30] KVM: arm64: COCCI: remove_unused.cocci: remove
 unused ctxt and hypstate variables
Message-Id: <20210924125359.2587041-24-tabba@google.com>
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
 alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
 christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com,
 kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kernel-team@android.com, tabba@google.com

The preceding semantic patches added these local variables aggressively.
Remove the ones that ended up not being used. Also, some of the added
variables are missing a blank line after their definition; insert it for
the remaining ones.
This applies the semantic patch with the following command:

spatch --sp-file cocci_refactor/remove_unused.cocci --dir arch/arm64/kvm/hyp --in-place --include-headers --force-diff

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/exception.c             | 5 -----
 arch/arm64/kvm/hyp/include/hyp/switch.h    | 9 ++++-----
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 2 ++
 arch/arm64/kvm/hyp/nvhe/switch.c           | 1 -
 arch/arm64/kvm/hyp/vhe/switch.c            | 3 ---
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c         | 3 ---
 6 files changed, 6 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index a08806efe031..bb0bc1f5568c 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -59,31 +59,26 @@ static void __ctxt_write_spsr_und(struct kvm_cpu_context *vcpu_ctxt, u64 val)
 
 static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
 {
-	const struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	return __ctxt_read_sys_reg(&vcpu_ctxt(vcpu), reg);
 }
 
 static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	__ctxt_write_sys_reg(&vcpu_ctxt(vcpu), val, reg);
 }
 
 static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	__ctxt_write_spsr(&vcpu_ctxt(vcpu), val);
 }
 
 static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	__ctxt_write_spsr_abt(&vcpu_ctxt(vcpu), val);
 }
 
 static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	__ctxt_write_spsr_und(&vcpu_ctxt(vcpu), val);
 }

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 44e76993a9b4..433601f79b94 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -37,6 +37,7 @@ extern struct exception_table_entry __stop___kvm_ex_table;
 static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
+
 	/*
 	 * When the system doesn't support FP/SIMD, we cannot rely on
 	 * the _TIF_FOREIGN_FPSTATE flag. However, we always inject an
@@ -55,8 +56,8 @@ static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
 /* Save the 32-bit only FPSIMD system register state */
 static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
@@ -65,8 +66,6 @@ static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu)
 
 static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	/*
	 * We are about to set CPTR_EL2.TFP to trap all floating point
	 * register accesses to EL2, however, the ARM ARM clearly states that
@@ -220,8 +219,8 @@ static inline void __hyp_sve_save_host(struct kvm_vcpu *vcpu)
 
 static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+
 	sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
 	__sve_restore_state(vcpu_sve_pffr(vcpu), &ctxt_fp_regs(vcpu_ctxt)->fpsr);
@@ -395,7 +394,6 @@ DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *ctxt;
 	u64 val;
@@ -428,6 +426,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgi
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
 		hyp_state_fault(vcpu_hyps).esr_el2 = read_sysreg_el2(SYS_ESR);

diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index df9cd2177e71..b750ff40a604 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -160,6 +160,7 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
@@ -179,6 +180,7 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index d9a69e66158c..b90ec8db5864 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -37,7 +37,6 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val;
 
 	___activate_traps(vcpu_hyps);

diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 5039910a7c80..7f926016cebe 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -34,7 +34,6 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val;
 
 	___activate_traps(vcpu_hyps);
@@ -168,8 +167,6 @@ NOKPROBE_SYMBOL(__kvm_vcpu_run_vhe);
 
 int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	int ret;
 
 	local_daif_mask();

diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
index 1571c144e9b0..1ded8be83c5a 100644
--- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
@@ -64,7 +63,6 @@ NOKPROBE_SYMBOL(sysreg_restore_guest_state_vhe);
 void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
 	struct kvm_cpu_context *host_ctxt;
@@ -99,8 +98,6 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
  */
 void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
 	struct kvm_cpu_context *host_ctxt;

From patchwork Fri Sep 24 12:53:53 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515371
Date: Fri, 24 Sep 2021 13:53:53 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-25-tabba@google.com>
Subject: [RFC PATCH v1 24/30] KVM: arm64: remove unused functions
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com
__vcpu_write_spsr*() functions are not used anymore.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/exception.c | 15 ---------------
 1 file changed, 15 deletions(-)

diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index bb0bc1f5568c..fdfc809f61b8 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -67,21 +67,6 @@ static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
 	__ctxt_write_sys_reg(&vcpu_ctxt(vcpu), val, reg);
 }
 
-static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val)
-{
-	__ctxt_write_spsr(&vcpu_ctxt(vcpu), val);
-}
-
-static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val)
-{
-	__ctxt_write_spsr_abt(&vcpu_ctxt(vcpu), val);
-}
-
-static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
-{
-	__ctxt_write_spsr_und(&vcpu_ctxt(vcpu), val);
-}
-
 /*
  * This performs the exception entry at a given EL (@target_mode), stashing PC
  * and PSTATE into ELR and SPSR respectively, and compute the new PC/PSTATE.
From patchwork Fri Sep 24 12:53:54 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515373
Date: Fri, 24 Sep 2021 13:53:54 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-26-tabba@google.com>
Subject: [RFC PATCH v1 25/30] KVM: arm64: separate kvm_run() for protected VMs
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

Split kvm_run() for protected and non-protected VMs. Protected VMs support fewer features, so separating the two paths will ease the refactoring and simplify the code. This patch starts only by replicating the code from the non-protected case, to make it easier to diff against future patches.
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/nvhe/switch.c | 119 ++++++++++++++++++++++++++++++-
 1 file changed, 116 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index b90ec8db5864..9e79f97ba49e 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -119,7 +119,7 @@ static void __hyp_vgic_save_state(struct kvm_vcpu *vcpu)
 	}
 }
 
-/* Restore VGICv3 state on non_VEH systems */
+/* Restore VGICv3 state on nVHE systems */
 static void __hyp_vgic_restore_state(struct kvm_vcpu *vcpu)
 {
 	if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) {
@@ -166,8 +166,110 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
 	write_sysreg(pmu->events_host, pmcntenset_el0);
 }
 
-/* Switch to the guest for legacy non-VHE systems */
-int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+/* Switch to the non-protected guest */
+static int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_hyp_state *vcpu_hyps = &vcpu->arch.hyp_state;
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu->arch.ctxt;
+	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
+	struct vgic_dist *vgic = &kvm->arch.vgic;
+	struct kvm_cpu_context *host_ctxt;
+	struct kvm_cpu_context *guest_ctxt;
+	bool pmu_switch_needed;
+	u64 exit_code;
+
+	/*
+	 * Having IRQs masked via PMR when entering the guest means the GIC
+	 * will not signal the CPU of interrupts of lower priority, and the
+	 * only way to get out will be via guest exceptions.
+	 * Naturally, we want to avoid this.
+	 */
+	if (system_uses_irq_prio_masking()) {
+		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+		pmr_sync();
+	}
+
+	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
+	set_hyp_running_vcpu(host_ctxt, vcpu);
+	guest_ctxt = &vcpu->arch.ctxt;
+
+	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
+
+	__sysreg_save_state_nvhe(host_ctxt);
+	/*
+	 * We must flush and disable the SPE buffer for nVHE, as
+	 * the translation regime(EL1&0) is going to be loaded with
+	 * that of the guest. And we must do this before we change the
+	 * translation regime to EL2 (via MDCR_EL2_E2PB == 0) and
+	 * before we load guest Stage1.
+	 */
+	__debug_save_host_buffers_nvhe(vcpu);
+
+	kvm_adjust_pc(vcpu_ctxt, vcpu_hyps);
+
+	/*
+	 * We must restore the 32-bit state before the sysregs, thanks
+	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
+	 *
+	 * Also, and in order to be able to deal with erratum #1319537 (A57)
+	 * and #1319367 (A72), we must ensure that all VM-related sysreg are
+	 * restored before we enable S2 translation.
+	 */
+	__sysreg32_restore_state(vcpu);
+	__sysreg_restore_state_nvhe(guest_ctxt);
+
+	__load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu));
+	__activate_traps(vcpu);
+
+	__hyp_vgic_restore_state(vcpu);
+	__timer_enable_traps();
+
+	__debug_switch_to_guest(vcpu);
+
+	do {
+		struct kvm_cpu_context *hyp_ctxt = this_cpu_ptr(&kvm_hyp_ctxt);
+		set_hyp_running_vcpu(hyp_ctxt, vcpu);
+
+		/* Jump in the fire! */
+		exit_code = __guest_enter(guest_ctxt);
+
+		/* And we're baaack! */
+	} while (fixup_guest_exit(vcpu, vgic, &exit_code));
+
+	__sysreg_save_state_nvhe(guest_ctxt);
+	__sysreg32_save_state(vcpu);
+	__timer_disable_traps();
+	__hyp_vgic_save_state(vcpu);
+
+	__deactivate_traps(vcpu_hyps);
+	__load_host_stage2();
+
+	__sysreg_restore_state_nvhe(host_ctxt);
+
+	if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED)
+		__fpsimd_save_fpexc32(vcpu);
+
+	__debug_switch_to_host(vcpu);
+	/*
+	 * This must come after restoring the host sysregs, since a non-VHE
+	 * system may enable SPE here and make use of the TTBRs.
+	 */
+	__debug_restore_host_buffers_nvhe(vcpu);
+
+	if (pmu_switch_needed)
+		__pmu_switch_to_host(host_ctxt);
+
+	/* Returning to host will clear PSR.I, remask PMR if needed */
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(GIC_PRIO_IRQOFF);
+
+	set_hyp_running_vcpu(host_ctxt, NULL);
+
+	return exit_code;
+}
+
+/* Switch to the protected guest */
+static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
@@ -268,6 +370,17 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	return exit_code;
 }
 
+/* Switch to the guest for non-VHE and protected KVM systems */
+int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+{
+	vcpu = kern_hyp_va(vcpu);
+
+	if (likely(!kvm_vm_is_protected(kern_hyp_va(vcpu->kvm))))
+		return __kvm_vcpu_run_nvhe(vcpu);
+	else
+		return __kvm_vcpu_run_pvm(vcpu);
+}
+
 void __noreturn hyp_panic(void)
 {
 	u64 spsr = read_sysreg_el2(SYS_SPSR);

From patchwork Fri Sep 24 12:53:55 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515375
Date: Fri, 24 Sep 2021 13:53:55 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-27-tabba@google.com>
Subject: [RFC PATCH v1 26/30] KVM: arm64: pVM activate_traps to use vcpu_ctxt and vcpu_hyp_state
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

Refactor the protected VM activate_traps code not to use vcpu. Protected 32-bit VMs are not supported, so the code that sets the 32-bit floating point traps isn't needed in the pVM case.
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/nvhe/switch.c | 35 +++++++++++++++++++++-----------
 1 file changed, 23 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 9e79f97ba49e..0d654b324612 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -34,9 +34,10 @@ DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
 DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
-static void __activate_traps(struct kvm_vcpu *vcpu)
+/* Activate traps for protected guests */
+static void __activate_traps_pvm(struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	u64 val;
 
 	___activate_traps(vcpu_hyps);
@@ -44,26 +45,36 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 	val = CPTR_EL2_DEFAULT;
 	val |= CPTR_EL2_TTA | CPTR_EL2_TAM;
-	if (!update_fp_enabled(vcpu)) {
-		val |= CPTR_EL2_TFP | CPTR_EL2_TZ;
-		__activate_traps_fpsimd32(vcpu);
-	}
 	write_sysreg(val, cptr_el2);
 	write_sysreg(__this_cpu_read(kvm_hyp_vector), vbar_el2);
 
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
-		struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt;
-
 		isb();
 		/*
		 * At this stage, and thanks to the above isb(), S2 is
		 * configured and enabled. We can now restore the guest's S1
		 * configuration: SCTLR, and only then TCR.
		 */
-		write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL1), SYS_SCTLR);
+		write_sysreg_el1(ctxt_sys_reg(vcpu_ctxt, SCTLR_EL1), SYS_SCTLR);
 		isb();
-		write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL1), SYS_TCR);
+		write_sysreg_el1(ctxt_sys_reg(vcpu_ctxt, TCR_EL1), SYS_TCR);
+	}
+}
+
+/* Activate traps for non-protected guests in nVHE */
+static void __activate_traps_nvhe(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
+	struct kvm_cpu_context *vcpu_ctxt = &vcpu->arch.ctxt;
+
+	__activate_traps_pvm(vcpu_ctxt, vcpu_hyps);
+
+	if (!update_fp_enabled(vcpu)) {
+		u64 val = CPTR_EL2_DEFAULT | CPTR_EL2_TTA | CPTR_EL2_TAM |
+			  CPTR_EL2_TFP | CPTR_EL2_TZ;
+		__activate_traps_fpsimd32(vcpu);
+		write_sysreg(val, cptr_el2);
 	}
 }
@@ -219,7 +230,7 @@ static int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	__sysreg_restore_state_nvhe(guest_ctxt);
 
 	__load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu));
-	__activate_traps(vcpu);
+	__activate_traps_nvhe(vcpu);
 
 	__hyp_vgic_restore_state(vcpu);
 	__timer_enable_traps();
@@ -321,7 +332,7 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu)
 	__sysreg_restore_state_nvhe(guest_ctxt);
 
 	__load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu));
-	__activate_traps(vcpu);
+	__activate_traps_pvm(vcpu_ctxt, vcpu_hyps);
 
 	__hyp_vgic_restore_state(vcpu);
 	__timer_enable_traps();

From patchwork Fri Sep 24 12:53:56 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515381
Date: Fri, 24 Sep 2021 13:53:56 +0100 In-Reply-To: <20210924125359.2587041-1-tabba@google.com> Message-Id: <20210924125359.2587041-28-tabba@google.com> References: <20210924125359.2587041-1-tabba@google.com> Subject: [RFC PATCH v1 27/30] KVM: arm64: remove unsupported
pVM features From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Remove code for unsupported features for protected VMs from __kvm_vcpu_run_pvm(). Do not run unsupported code (SVE) in __hyp_handle_fpsimd(). Enforcement of this is in the fixed features patch series [1].
The code removed or disabled is related to the following: - PMU - Debug - Arm32 - SPE - SVE [1] Link: https://lore.kernel.org/kvmarm/20210922124704.600087-1-tabba@google.com/T/#u Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/include/hyp/switch.h | 5 ++-- arch/arm64/kvm/hyp/nvhe/switch.c | 36 ------------------------- 2 files changed, 3 insertions(+), 38 deletions(-) diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 433601f79b94..3ef429cfd9af 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -232,6 +232,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) { struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + const bool is_protected = is_nvhe_hyp_code() && kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)); bool sve_guest, sve_host; u8 esr_ec; u64 reg; @@ -239,7 +240,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) if (!system_supports_fpsimd()) return false; - if (system_supports_sve()) { + if (system_supports_sve() && !is_protected) { sve_guest = hyp_state_has_sve(vcpu_hyps); sve_host = hyp_state_flags(vcpu_hyps) & KVM_ARM64_HOST_SVE_IN_USE; } else { @@ -247,7 +248,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) sve_host = false; } - esr_ec = kvm_vcpu_trap_get_class(vcpu); + esr_ec = kvm_hyp_state_trap_get_class(vcpu_hyps); if (esr_ec != ESR_ELx_EC_FP_ASIMD && esr_ec != ESR_ELx_EC_SVE) return false; diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 0d654b324612..aa0dc4f0433b 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -288,7 +288,6 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu) struct vgic_dist *vgic = &kvm->arch.vgic; struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; - bool pmu_switch_needed; u64 exit_code; /* @@ -306,29 +305,10 @@ static int 
__kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu) set_hyp_running_vcpu(host_ctxt, vcpu); guest_ctxt = &vcpu->arch.ctxt; - pmu_switch_needed = __pmu_switch_to_guest(host_ctxt); - __sysreg_save_state_nvhe(host_ctxt); - /* - * We must flush and disable the SPE buffer for nVHE, as - * the translation regime(EL1&0) is going to be loaded with - * that of the guest. And we must do this before we change the - * translation regime to EL2 (via MDCR_EL2_E2PB == 0) and - * before we load guest Stage1. - */ - __debug_save_host_buffers_nvhe(vcpu); kvm_adjust_pc(vcpu_ctxt, vcpu_hyps); - /* - * We must restore the 32-bit state before the sysregs, thanks - * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72). - * - * Also, and in order to be able to deal with erratum #1319537 (A57) - * and #1319367 (A72), we must ensure that all VM-related sysreg are - * restored before we enable S2 translation. - */ - __sysreg32_restore_state(vcpu); __sysreg_restore_state_nvhe(guest_ctxt); __load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu)); @@ -337,8 +317,6 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu) __hyp_vgic_restore_state(vcpu); __timer_enable_traps(); - __debug_switch_to_guest(vcpu); - do { struct kvm_cpu_context *hyp_ctxt = this_cpu_ptr(&kvm_hyp_ctxt); set_hyp_running_vcpu(hyp_ctxt, vcpu); @@ -350,7 +328,6 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu) } while (fixup_guest_exit(vcpu, vgic, &exit_code)); __sysreg_save_state_nvhe(guest_ctxt); - __sysreg32_save_state(vcpu); __timer_disable_traps(); __hyp_vgic_save_state(vcpu); @@ -359,19 +336,6 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu) __sysreg_restore_state_nvhe(host_ctxt); - if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED) - __fpsimd_save_fpexc32(vcpu); - - __debug_switch_to_host(vcpu); - /* - * This must come after restoring the host sysregs, since a non-VHE - * system may enable SPE here and make use of the TTBRs. 
- */ - __debug_restore_host_buffers_nvhe(vcpu); - - if (pmu_switch_needed) - __pmu_switch_to_host(host_ctxt); - /* Returning to host will clear PSR.I, remask PMR if needed */ if (system_uses_irq_prio_masking()) gic_write_pmr(GIC_PRIO_IRQOFF); From patchwork Fri Sep 24 12:53:57 2021 X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 12515377
Date: Fri, 24 Sep 2021 13:53:57 +0100 In-Reply-To: <20210924125359.2587041-1-tabba@google.com> Message-Id: <20210924125359.2587041-29-tabba@google.com> References: <20210924125359.2587041-1-tabba@google.com> Subject: [RFC PATCH v1 28/30] KVM: arm64: reduce scope of pVM fixup_guest_exit to hyp_state and kvm_cpu_ctxt From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Reduce the scope of fixup_guest_exit for protected VMs to only need hyp_state and kvm_cpu_ctxt. Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/include/hyp/switch.h | 23 +++++++++++++++++++---- arch/arm64/kvm/hyp/nvhe/switch.c | 7
++----- arch/arm64/kvm/hyp/vhe/switch.c | 3 +-- 3 files changed, 22 insertions(+), 11 deletions(-) diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 3ef429cfd9af..ea9571f712c6 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -423,11 +423,8 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu) * the guest, false when we should restore the host state and return to the * main run loop. */ -static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgic, u64 *exit_code) +static inline bool _fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgic, struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps, u64 *exit_code) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) hyp_state_fault(vcpu_hyps).esr_el2 = read_sysreg_el2(SYS_ESR); @@ -518,6 +515,24 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgi return true; } +static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) +{ + struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt; + struct vcpu_hyp_state *hyps = &vcpu->arch.hyp_state; + // TODO: create helper for getting VA + struct kvm *kvm = vcpu->kvm; + + if (is_nvhe_hyp_code()) + kvm = kern_hyp_va(kvm); + + return _fixup_guest_exit(vcpu, &kvm->arch.vgic, ctxt, hyps, exit_code); +} + +static inline bool fixup_pvm_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgic, struct kvm_cpu_context *ctxt, struct vcpu_hyp_state *hyps, u64 *exit_code) +{ + return _fixup_guest_exit(vcpu, vgic, ctxt, hyps, exit_code); +} + static inline void __kvm_unexpected_el2_exception(void) { extern char __guest_exit_panic[]; diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index aa0dc4f0433b..1920aebbe49a 100644 --- 
a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -182,8 +182,6 @@ static int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu) { struct vcpu_hyp_state *vcpu_hyps = &vcpu->arch.hyp_state; struct kvm_cpu_context *vcpu_ctxt = &vcpu->arch.ctxt; - struct kvm *kvm = kern_hyp_va(vcpu->kvm); - struct vgic_dist *vgic = &kvm->arch.vgic; struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; bool pmu_switch_needed; @@ -245,7 +243,7 @@ static int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu) exit_code = __guest_enter(guest_ctxt); /* And we're baaack! */ - } while (fixup_guest_exit(vcpu, vgic, &exit_code)); + } while (fixup_guest_exit(vcpu, &exit_code)); __sysreg_save_state_nvhe(guest_ctxt); __sysreg32_save_state(vcpu); @@ -285,7 +283,6 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu) struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm *kvm = kern_hyp_va(vcpu->kvm); - struct vgic_dist *vgic = &kvm->arch.vgic; struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; u64 exit_code; @@ -325,7 +322,7 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu) exit_code = __guest_enter(guest_ctxt); /* And we're baaack! 
*/ - } while (fixup_guest_exit(vcpu, vgic, &exit_code)); + } while (fixup_pvm_guest_exit(vcpu, &kvm->arch.vgic, vcpu_ctxt, vcpu_hyps, &exit_code)); __sysreg_save_state_nvhe(guest_ctxt); __timer_disable_traps(); diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 7f926016cebe..4a05aff37325 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -110,7 +110,6 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) { struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - struct vgic_dist *vgic = &vcpu->kvm->arch.vgic; struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; u64 exit_code; @@ -148,7 +147,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) exit_code = __guest_enter(guest_ctxt); /* And we're baaack! */ - } while (fixup_guest_exit(vcpu, vgic, &exit_code)); + } while (fixup_guest_exit(vcpu, &exit_code)); sysreg_save_guest_state_vhe(guest_ctxt); From patchwork Fri Sep 24 12:53:58 2021 X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 12515383
Date: Fri, 24 Sep 2021 13:53:58 +0100 In-Reply-To: <20210924125359.2587041-1-tabba@google.com> Message-Id: <20210924125359.2587041-30-tabba@google.com> References: <20210924125359.2587041-1-tabba@google.com> Subject: [RFC PATCH v1 29/30] [DONOTMERGE] Remove Coccinelle scripts added for refactoring From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com,
qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com The scripts are not needed anymore, and were included for the git history. Signed-off-by: Fuad Tabba --- cocci_refactor/add_ctxt.cocci | 169 ------------------------ cocci_refactor/add_hypstate.cocci | 125 ------------------ cocci_refactor/hyp_ctxt.cocci | 38 ------ cocci_refactor/range.cocci | 50 ------- cocci_refactor/remove_unused.cocci | 69 ---------- cocci_refactor/test.cocci | 20 --- cocci_refactor/use_ctxt.cocci | 32 ----- cocci_refactor/use_ctxt_access.cocci | 39 ------ cocci_refactor/use_hypstate.cocci | 63 --------- cocci_refactor/vcpu_arch_ctxt.cocci | 13 -- cocci_refactor/vcpu_declr.cocci | 59 --------- cocci_refactor/vcpu_flags.cocci | 10 -- cocci_refactor/vcpu_hyp_accessors.cocci | 35 ----- cocci_refactor/vcpu_hyp_state.cocci | 30 ----- cocci_refactor/vgic3_cpu.cocci | 118 ----------------- 15 files changed, 870 deletions(-) delete mode 100644 cocci_refactor/add_ctxt.cocci delete mode 100644 cocci_refactor/add_hypstate.cocci delete mode 100644 cocci_refactor/hyp_ctxt.cocci delete mode 100644 cocci_refactor/range.cocci delete mode 100644 cocci_refactor/remove_unused.cocci delete mode 100644 cocci_refactor/test.cocci delete mode 100644 cocci_refactor/use_ctxt.cocci delete mode 100644 cocci_refactor/use_ctxt_access.cocci delete mode 100644 cocci_refactor/use_hypstate.cocci delete mode 100644 cocci_refactor/vcpu_arch_ctxt.cocci delete mode 100644 cocci_refactor/vcpu_declr.cocci delete
mode 100644 cocci_refactor/vcpu_flags.cocci delete mode 100644 cocci_refactor/vcpu_hyp_accessors.cocci delete mode 100644 cocci_refactor/vcpu_hyp_state.cocci delete mode 100644 cocci_refactor/vgic3_cpu.cocci diff --git a/cocci_refactor/add_ctxt.cocci b/cocci_refactor/add_ctxt.cocci deleted file mode 100644 index 203644944ace..000000000000 --- a/cocci_refactor/add_ctxt.cocci +++ /dev/null @@ -1,169 +0,0 @@ -// - -/* -spatch --sp-file add_ctxt.cocci --dir arch/arm64/kvm/hyp --ignore arch/arm64/kvm/hyp/nvhe/debug-sr.c --ignore arch/arm64/kvm/hyp/vhe/debug-sr.c --include-headers --in-place -*/ - - -@exists@ -identifier vcpu; -fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; -identifier fc; -@@ -<... -( - struct kvm_vcpu *vcpu = NULL; -+ struct kvm_cpu_context *vcpu_ctxt; -| - struct kvm_vcpu *vcpu = ...; -+ struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); -| - struct kvm_vcpu *vcpu; -+ struct kvm_cpu_context *vcpu_ctxt; -) -<... - vcpu = ...; -+ vcpu_ctxt = &vcpu_ctxt(vcpu); -...> -fc(..., vcpu, ...) -...> - -@exists@ -identifier func != {kvm_arch_vcpu_run_pid_change}; -identifier fc != {vcpu_ctxt}; -identifier vcpu; -fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; -@@ -func(..., struct kvm_vcpu *vcpu, ...) { -+ struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); -<+... -fc(..., vcpu, ...) -...+> - } - -@@ -expression a, b; -identifier vcpu; -fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; -iterator name kvm_for_each_vcpu; -identifier fc; -@@ -kvm_for_each_vcpu(a, vcpu, b) - { -+ vcpu_ctxt = &vcpu_ctxt(vcpu); -<+... -fc(..., vcpu, ...) -...+> - } - -@@ -identifier vcpu_ctxt, vcpu; -iterator name kvm_for_each_vcpu; -type T; -identifier x; -statement S1, S2; -@@ -kvm_for_each_vcpu(...) - { -- vcpu_ctxt = &vcpu_ctxt(vcpu); -... when != S1 -+ vcpu_ctxt = &vcpu_ctxt(vcpu); - S2 - ... when any - } - -@ -disable optional_qualifier -exists -@ -identifier vcpu; -identifier vcpu_ctxt; -@@ -<... 
- const struct kvm_vcpu *vcpu = ...; -- struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); -+ const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); -...> - -@disable optional_qualifier@ -identifier func, vcpu; -identifier vcpu_ctxt; -@@ -func(..., const struct kvm_vcpu *vcpu, ...) { -- struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); -+ const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); -... - } - -@exists@ -expression r1, r2; -identifier vcpu; -fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; -@@ -( -- vcpu_gp_regs(vcpu) -+ ctxt_gp_regs(vcpu_ctxt) -| -- vcpu_spsr_abt(vcpu) -+ ctxt_spsr_abt(vcpu_ctxt) -| -- vcpu_spsr_und(vcpu) -+ ctxt_spsr_und(vcpu_ctxt) -| -- vcpu_spsr_irq(vcpu) -+ ctxt_spsr_irq(vcpu_ctxt) -| -- vcpu_spsr_fiq(vcpu) -+ ctxt_spsr_fiq(vcpu_ctxt) -| -- vcpu_fp_regs(vcpu) -+ ctxt_fp_regs(vcpu_ctxt) -| -- __vcpu_sys_reg(vcpu, r1) -+ ctxt_sys_reg(vcpu_ctxt, r1) -| -- __vcpu_read_sys_reg(vcpu, r1) -+ __ctxt_read_sys_reg(vcpu_ctxt, r1) -| -- __vcpu_write_sys_reg(vcpu, r1, r2) -+ __ctxt_write_sys_reg(vcpu_ctxt, r1, r2) -| -- __vcpu_write_spsr(vcpu, r1) -+ __ctxt_write_spsr(vcpu_ctxt, r1) -| -- __vcpu_write_spsr_abt(vcpu, r1) -+ __ctxt_write_spsr_abt(vcpu_ctxt, r1) -| -- __vcpu_write_spsr_und(vcpu, r1) -+ __ctxt_write_spsr_und(vcpu_ctxt, r1) -| -- vcpu_pc(vcpu) -+ ctxt_pc(vcpu_ctxt) -| -- vcpu_cpsr(vcpu) -+ ctxt_cpsr(vcpu_ctxt) -| -- vcpu_mode_is_32bit(vcpu) -+ ctxt_mode_is_32bit(vcpu_ctxt) -| -- vcpu_set_thumb(vcpu) -+ ctxt_set_thumb(vcpu_ctxt) -| -- vcpu_get_reg(vcpu, r1) -+ ctxt_get_reg(vcpu_ctxt, r1) -| -- vcpu_set_reg(vcpu, r1, r2) -+ ctxt_set_reg(vcpu_ctxt, r1, r2) -) - - -/* Handles one case of a call within a call. 
*/
-@@
-expression r1, r2;
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-@@
-- vcpu_pc(vcpu)
-+ ctxt_pc(vcpu_ctxt)
-
-//
diff --git a/cocci_refactor/add_hypstate.cocci b/cocci_refactor/add_hypstate.cocci
deleted file mode 100644
index e8635d0e8f57..000000000000
--- a/cocci_refactor/add_hypstate.cocci
+++ /dev/null
@@ -1,125 +0,0 @@
-//
-
-/*
-FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]" ! -name "debug-sr*") arch/arm64/include/asm/kvm_hyp.h"
-spatch --sp-file add_hypstate.cocci $FILES --in-place
-*/
-
-@exists@
-identifier vcpu;
-fresh identifier hyps = vcpu ## "_hyps";
-identifier fc;
-@@
-<...
-(
- struct kvm_vcpu *vcpu = NULL;
-+ struct vcpu_hyp_state *hyps;
-|
- struct kvm_vcpu *vcpu = ...;
-+ struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
-|
- struct kvm_vcpu *vcpu;
-+ struct vcpu_hyp_state *hyps;
-)
-<...
- vcpu = ...;
-+ hyps = &hyp_state(vcpu);
-...>
-fc(..., vcpu, ...)
-...>
-
-@exists@
-identifier func != {kvm_arch_vcpu_run_pid_change};
-identifier vcpu;
-fresh identifier hyps = vcpu ## "_hyps";
-identifier fc;
-@@
-func(..., struct kvm_vcpu *vcpu, ...) {
-+ struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
-<+...
-fc(..., vcpu, ...)
-...+>
- }
-
-@@
-expression a, b;
-identifier vcpu;
-fresh identifier hyps = vcpu ## "_hyps";
-iterator name kvm_for_each_vcpu;
-identifier fc;
-@@
-kvm_for_each_vcpu(a, vcpu, b)
- {
-+ hyps = &hyp_state(vcpu);
-<+...
-fc(..., vcpu, ...)
-...+>
- }
-
-@@
-identifier hyps, vcpu;
-iterator name kvm_for_each_vcpu;
-statement S1, S2;
-@@
-kvm_for_each_vcpu(...)
- {
-- hyps = &hyp_state(vcpu);
-... when != S1
-+ hyps = &hyp_state(vcpu);
- S2
- ... when any
- }
-
-@
-disable optional_qualifier
-exists
-@
-identifier vcpu, hyps;
-@@
-<...
- const struct kvm_vcpu *vcpu = ...;
-- struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
-+ const struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
-...>
-
-
-@@
-identifier func, vcpu, hyps;
-@@
-func(..., const struct kvm_vcpu *vcpu, ...) {
-- struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
-+ const struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
-...
- }
-
-@exists@
-identifier vcpu;
-fresh identifier hyps = vcpu ## "_hyps";
-@@
-(
-- vcpu_hcr_el2(vcpu)
-+ hyp_state_hcr_el2(hyps)
-|
-- vcpu_mdcr_el2(vcpu)
-+ hyp_state_mdcr_el2(hyps)
-|
-- vcpu_vsesr_el2(vcpu)
-+ hyp_state_vsesr_el2(hyps)
-|
-- vcpu_fault(vcpu)
-+ hyp_state_fault(hyps)
-|
-- vcpu_flags(vcpu)
-+ hyp_state_flags(hyps)
-|
-- vcpu_has_sve(vcpu)
-+ hyp_state_has_sve(hyps)
-|
-- vcpu_has_ptrauth(vcpu)
-+ hyp_state_has_ptrauth(hyps)
-|
-- kvm_arm_vcpu_sve_finalized(vcpu)
-+ kvm_arm_hyp_state_sve_finalized(hyps)
-)
-
-//
diff --git a/cocci_refactor/hyp_ctxt.cocci b/cocci_refactor/hyp_ctxt.cocci
deleted file mode 100644
index af7974e3a502..000000000000
--- a/cocci_refactor/hyp_ctxt.cocci
+++ /dev/null
@@ -1,38 +0,0 @@
-// Remove vcpu if all we're using is hypstate and ctxt
-
-/*
-FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]")"
-spatch --sp-file hyp_ctxt.cocci $FILES --in-place;
-*/
-
-//
-
-@remove@
-identifier func !~ "^trap_|^access_|dbg_to_reg|check_pmu_access_disabled|match_mpidr|get_ctr_el0|emulate_cp|unhandled_cp_access|index_to_sys_reg_desc|kvm_pmu_|pmu_counter_idx_valid|reset_|read_from_write_only|write_to_read_only|undef_access|vgic_|kvm_handle_|handle_sve|handle_smc|handle_no_fpsimd|id_visibility|reg_to_dbg|ptrauth_visibility|sve_visibility|kvm_arch_sched_in|kvm_arch_vcpu_|kvm_vcpu_pmu_|kvm_psci_|kvm_arm_copy_fw_reg_indices|kvm_arm_pvtime_|kvm_trng_|kvm_arm_timer_";
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-fresh identifier vcpu_hyps = vcpu ## "_hyps";
-identifier hyps_remove;
-identifier ctxt_remove;
-@@
-func(...,
-- struct kvm_vcpu *vcpu
-+ struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps
-,...) {
-?- struct vcpu_hyp_state *hyps_remove = ...;
-?- struct kvm_cpu_context *ctxt_remove = ...;
-... when != vcpu
- }
-
-@@
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-fresh identifier vcpu_hyps = vcpu ## "_hyps";
-identifier remove.func;
-@@
- func(
-- vcpu
-+ vcpu_ctxt, vcpu_hyps
- , ...)
-
-//
\ No newline at end of file
diff --git a/cocci_refactor/range.cocci b/cocci_refactor/range.cocci
deleted file mode 100644
index d99b9ee30657..000000000000
--- a/cocci_refactor/range.cocci
+++ /dev/null
@@ -1,50 +0,0 @@
-
-
-//
-
-/*
- FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h"; spatch --sp-file range.cocci $FILES
-*/
-
-@initialize:python@
-@@
-starts = ("start", "begin", "from", "floor", "addr", "kaddr")
-ends = ("size", "length", "len")
-
-//ends = ("end", "to", "ceiling", "size", "length", "len")
-
-
-@start_end@
-identifier f;
-type A, B;
-identifier start, end;
-parameter list[n] ps;
-@@
-f(ps, A start, B end, ...) {
-...
-}
-
-@script:python@
-start << start_end.start;
-end << start_end.end;
-ta << start_end.A;
-tb << start_end.B;
-@@
-
-if ta != tb and tb != "size_t":
- cocci.include_match(False)
-elif not any(x in start for x in starts) and not any(x in end for x in ends):
- cocci.include_match(False)
-
-@@
-identifier f = start_end.f;
-expression list[start_end.n] xs;
-expression a, b;
-@@
-(
-* f(xs, a, a, ...)
-|
-* f(xs, a, a - b, ...)
-)
-
-//
\ No newline at end of file
diff --git a/cocci_refactor/remove_unused.cocci b/cocci_refactor/remove_unused.cocci
deleted file mode 100644
index c06278398198..000000000000
--- a/cocci_refactor/remove_unused.cocci
+++ /dev/null
@@ -1,69 +0,0 @@
-//
-
-/*
-spatch --sp-file remove_unused.cocci --dir arch/arm64/kvm/hyp --in-place --include-headers --force-diff
-*/
-
-@@
-identifier hyps;
-@@
-{
-...
-(
-- struct vcpu_hyp_state *hyps = ...;
-|
-- struct vcpu_hyp_state *hyps;
-)
-... when != hyps
- when != if (...) { <+...hyps...+> }
-?- hyps = ...;
-... when != hyps
- when != if (...) { <+...hyps...+> }
-}
-
-@@
-identifier vcpu_ctxt;
-@@
-{
-...
-(
-- struct kvm_cpu_context *vcpu_ctxt = ...;
-|
-- struct kvm_cpu_context *vcpu_ctxt;
-)
-... when != vcpu_ctxt
- when != if (...) { <+...vcpu_ctxt...+> }
-?- vcpu_ctxt = ...;
-... when != vcpu_ctxt
- when != if (...) { <+...vcpu_ctxt...+> }
-}
-
-@@
-identifier x;
-identifier func;
-statement S;
-@@
-func(...)
- {
-...
-struct kvm_cpu_context *x = ...;
-+
-S
-...
- }
-
-@@
-identifier x;
-identifier func;
-statement S;
-@@
-func(...)
- {
-...
-struct vcpu_hyp_state *x = ...;
-+
-S
-...
- }
-
-//
diff --git a/cocci_refactor/test.cocci b/cocci_refactor/test.cocci
deleted file mode 100644
index 5eb685240ce7..000000000000
--- a/cocci_refactor/test.cocci
+++ /dev/null
@@ -1,20 +0,0 @@
-/*
- FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h"; spatch --sp-file test.cocci $FILES
-
-*/
-
-@r@
-identifier fn;
-@@
-fn(...) {
- hello;
- ...
-}
-
-@@
-identifier r.fn;
-@@
-static fn(...) {
-+ world;
- ...
-}
diff --git a/cocci_refactor/use_ctxt.cocci b/cocci_refactor/use_ctxt.cocci
deleted file mode 100644
index f3f961f567fd..000000000000
--- a/cocci_refactor/use_ctxt.cocci
+++ /dev/null
@@ -1,32 +0,0 @@
-//
-/*
-spatch --sp-file use_ctxt.cocci --dir arch/arm64/kvm/hyp --ignore debug-sr --include-headers --in-place
-spatch --sp-file use_ctxt.cocci --dir arch/arm64/kvm/hyp --ignore debug-sr --include-headers --in-place
-*/
-
-@remove_vcpu@
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-identifier ctxt_remove;
-identifier func !~ "(reset_unknown|reset_val|kvm_pmu_valid_counter_mask|reset_pmcr|kvm_arch_vcpu_in_kernel|__vgic_v3_)";
-@@
-func(
-- struct kvm_vcpu *vcpu
-+ struct kvm_cpu_context *vcpu_ctxt
-, ...) {
-- struct kvm_cpu_context *ctxt_remove = ...;
-... when != vcpu
- when != if (...) { <+...vcpu...+> }
-}
-
-@@
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-identifier func = remove_vcpu.func;
-@@
-func(
-- vcpu
-+ vcpu_ctxt
- , ...)
-
-//
diff --git a/cocci_refactor/use_ctxt_access.cocci b/cocci_refactor/use_ctxt_access.cocci
deleted file mode 100644
index 74f94141e662..000000000000
--- a/cocci_refactor/use_ctxt_access.cocci
+++ /dev/null
@@ -1,39 +0,0 @@
-//
-
-/*
-spatch --sp-file use_ctxt_access.cocci --dir arch/arm64/kvm/ --include-headers --in-place
-*/
-
-@@
-constant r;
-@@
-- __ctxt_sys_reg(&vcpu->arch.ctxt, r)
-+ &__vcpu_sys_reg(vcpu, r)
-
-@@
-identifier r;
-@@
-- vcpu->arch.ctxt.regs.r
-+ vcpu_gp_regs(vcpu)->r
-
-@@
-identifier r;
-@@
-- vcpu->arch.ctxt.fp_regs.r
-+ vcpu_fp_regs(vcpu)->r
-
-@@
-identifier r;
-fresh identifier accessor = "vcpu_" ## r;
-@@
-- &vcpu->arch.ctxt.r
-+ accessor(vcpu)
-
-@@
-identifier r;
-fresh identifier accessor = "vcpu_" ## r;
-@@
-- vcpu->arch.ctxt.r
-+ *accessor(vcpu)
-
-//
\ No newline at end of file
diff --git a/cocci_refactor/use_hypstate.cocci b/cocci_refactor/use_hypstate.cocci
deleted file mode 100644
index f685149de748..000000000000
--- a/cocci_refactor/use_hypstate.cocci
+++ /dev/null
@@ -1,63 +0,0 @@
-//
-
-/*
-FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]" ! -name "debug-sr*") arch/arm64/include/asm/kvm_hyp.h"
-spatch --sp-file use_hypstate.cocci $FILES --in-place
-*/
-
-
-@remove_vcpu_hyps@
-identifier vcpu;
-fresh identifier hyps = vcpu ## "_hyps";
-identifier hyps_remove;
-identifier func;
-@@
-func(
-- struct kvm_vcpu *vcpu
-+ struct vcpu_hyp_state *hyps
-, ...) {
-- struct vcpu_hyp_state *hyps_remove = ...;
-... when != vcpu
- when != if (...) { <+...vcpu...+> }
-}
-
-@@
-identifier vcpu;
-fresh identifier hyps = vcpu ## "_hyps";
-identifier func = remove_vcpu_hyps.func;
-@@
-func(
-- vcpu
-+ hyps
- , ...)
-
-@remove_vcpu_hyps_ctxt@
-identifier vcpu;
-fresh identifier hyps = vcpu ## "_hyps";
-identifier hyps_remove;
-identifier ctxt_remove;
-identifier func;
-@@
-func(
-- struct kvm_vcpu *vcpu
-+ struct vcpu_hyp_state *hyps
-, ...) {
-- struct vcpu_hyp_state *hyps_remove = ...;
-- struct kvm_cpu_context *ctxt_remove = ...;
-... when != vcpu
- when != if (...) { <+...vcpu...+> }
- when != ctxt_remove
- when != if (...) { <+...ctxt_remove...+> }
-}
-
-@@
-identifier vcpu;
-fresh identifier hyps = vcpu ## "_hyps";
-identifier func = remove_vcpu_hyps_ctxt.func;
-@@
-func(
-- vcpu
-+ hyps
- , ...)
-
-//
diff --git a/cocci_refactor/vcpu_arch_ctxt.cocci b/cocci_refactor/vcpu_arch_ctxt.cocci
deleted file mode 100644
index 69b3a000de4e..000000000000
--- a/cocci_refactor/vcpu_arch_ctxt.cocci
+++ /dev/null
@@ -1,13 +0,0 @@
-// spatch --sp-file vcpu_arch_ctxt.cocci --no-includes --include-headers --dir arch/arm64
-
-//
-@@
-identifier vcpu;
-@@
-(
-- vcpu->arch.ctxt.regs
-+ vcpu_gp_regs(vcpu)
-|
-- vcpu->arch.ctxt.fp_regs
-+ vcpu_fp_regs(vcpu)
-)
diff --git a/cocci_refactor/vcpu_declr.cocci b/cocci_refactor/vcpu_declr.cocci
deleted file mode 100644
index 59cd46bd6b2d..000000000000
--- a/cocci_refactor/vcpu_declr.cocci
+++ /dev/null
@@ -1,59 +0,0 @@
-
-/*
-FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h"; spatch --sp-file vcpu_declr.cocci $FILES --in-place
-*/
-
-//
-
-@@
-identifier vcpu;
-expression E;
-@@
-<...
-- struct kvm_vcpu *vcpu;
-+ struct kvm_vcpu *vcpu = E;
-
-- vcpu = E;
-...>
-
-
-/*
-@@
-identifier vcpu;
-identifier f1, f2;
-@@
-f1(...)
-{
-- struct kvm_vcpu *vcpu = NULL;
-+ struct kvm_vcpu *vcpu;
-... when != f2(..., vcpu, ...)
-}
-*/
-
-/*
-@find_after@
-identifier vcpu;
-position p;
-identifier f;
-@@
-<...
- struct kvm_vcpu *vcpu@p;
- ... when != vcpu = ...;
- f(..., vcpu, ...);
-...>
-
-@@
-identifier vcpu;
-expression E;
-position p != find_after.p;
-@@
-<...
-- struct kvm_vcpu *vcpu@p;
-+ struct kvm_vcpu *vcpu = E;
- ...
-- vcpu = E;
-...>
-
-*/
-
-//
diff --git a/cocci_refactor/vcpu_flags.cocci b/cocci_refactor/vcpu_flags.cocci
deleted file mode 100644
index 609bb7bd7bd0..000000000000
--- a/cocci_refactor/vcpu_flags.cocci
+++ /dev/null
@@ -1,10 +0,0 @@
-// spatch --sp-file el2_def_flags.cocci --no-includes --include-headers --dir arch/arm64
-
-//
-@@
-expression vcpu;
-@@
-
-- vcpu->arch.flags
-+ vcpu_flags(vcpu)
-//
\ No newline at end of file
diff --git a/cocci_refactor/vcpu_hyp_accessors.cocci b/cocci_refactor/vcpu_hyp_accessors.cocci
deleted file mode 100644
index 506b56f7216f..000000000000
--- a/cocci_refactor/vcpu_hyp_accessors.cocci
+++ /dev/null
@@ -1,35 +0,0 @@
-//
-
-/*
-spatch --sp-file vcpu_hyp_accessors.cocci --dir arch/arm64 --include-headers --in-place
-*/
-
-@find_defines@
-identifier macro;
-identifier vcpu;
-position p;
-@@
-#define macro(vcpu) vcpu@p
-
-@@
-identifier vcpu;
-position p != find_defines.p;
-@@
-(
-- vcpu@p->arch.hcr_el2
-+ vcpu_hcr_el2(vcpu)
-|
-- vcpu@p->arch.mdcr_el2
-+ vcpu_mdcr_el2(vcpu)
-|
-- vcpu@p->arch.vsesr_el2
-+ vcpu_vsesr_el2(vcpu)
-|
-- vcpu@p->arch.fault
-+ vcpu_fault(vcpu)
-|
-- vcpu@p->arch.flags
-+ vcpu_flags(vcpu)
-)
-
-//
diff --git a/cocci_refactor/vcpu_hyp_state.cocci b/cocci_refactor/vcpu_hyp_state.cocci
deleted file mode 100644
index 3005a6f11871..000000000000
--- a/cocci_refactor/vcpu_hyp_state.cocci
+++ /dev/null
@@ -1,30 +0,0 @@
-//
-
-// spatch --sp-file vcpu_hyp_state.cocci --no-includes --include-headers --dir arch/arm64 --very-quiet --in-place
-
-@@
-expression vcpu;
-@@
-- vcpu->arch.
-+ vcpu->arch.hyp_state.
-(
- hcr_el2
-|
- mdcr_el2
-|
- vsesr_el2
-|
- fault
-|
- flags
-|
- sysregs_loaded_on_cpu
-)
-
-@@
-identifier arch;
-@@
-- arch.fault
-+ arch.hyp_state.fault
-
-//
\ No newline at end of file
diff --git a/cocci_refactor/vgic3_cpu.cocci b/cocci_refactor/vgic3_cpu.cocci
deleted file mode 100644
index f7495b2e49cb..000000000000
--- a/cocci_refactor/vgic3_cpu.cocci
+++ /dev/null
@@ -1,118 +0,0 @@
-//
-
-/*
-spatch --sp-file vgic3_cpu.cocci arch/arm64/kvm/hyp/vgic-v3-sr.c --in-place
-*/
-
-
-@@
-identifier vcpu;
-fresh identifier vcpu_hyps = vcpu ## "_hyps";
-@@
-(
-- kvm_vcpu_sys_get_rt
-+ kvm_hyp_state_sys_get_rt
-|
-- kvm_vcpu_get_esr
-+ kvm_hyp_state_get_esr
-)
-- (vcpu)
-+ (vcpu_hyps)
-
-@add_cpu_if@
-identifier func;
-identifier c;
-@@
-int func(
-- struct kvm_vcpu *vcpu
-+ struct vgic_v3_cpu_if *cpu_if
-, ...)
-{
-<+...
-- vcpu->arch.vgic_cpu.vgic_v3.c
-+ cpu_if->c
-...+>
-}
-
-@@
-identifier func = add_cpu_if.func;
-@@
- func(
-- vcpu
-+ cpu_if
- , ...
- )
-
-
-@add_vgic_ctxt_hyps@
-identifier func;
-@@
-void func(
-- struct kvm_vcpu *vcpu
-+ struct vgic_v3_cpu_if *cpu_if, struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps
-, ...) {
-?- struct vcpu_hyp_state *vcpu_hyps = ...;
-?- struct kvm_cpu_context *vcpu_ctxt = ...;
- ...
- }
-
-@@
-identifier func = add_vgic_ctxt_hyps.func;
-@@
- func(
-- vcpu,
-+ cpu_if, vcpu_ctxt, vcpu_hyps,
- ...
- )
-
-
-@find_calls@
-identifier fn;
-type a, b;
-@@
-- void (*fn)(struct kvm_vcpu *, a, b);
-+ void (*fn)(struct vgic_v3_cpu_if *, struct kvm_cpu_context *, struct vcpu_hyp_state *, a, b);
-
-@@
-identifier fn = find_calls.fn;
-identifier a, b;
-@@
-- fn(vcpu, a, b);
-+ fn(cpu_if, vcpu_ctxt, vcpu_hyps, a, b);
-
-@@
-@@
-int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) {
-+ struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
-...
-}
-
-@remove@
-identifier func;
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-fresh identifier vcpu_hyps = vcpu ## "_hyps";
-identifier hyps_remove;
-identifier ctxt_remove;
-@@
-func(...,
-- struct kvm_vcpu *vcpu
-+ struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps
-,...) {
-?- struct vcpu_hyp_state *hyps_remove = ...;
-?- struct kvm_cpu_context *ctxt_remove = ...;
-... when != vcpu
- }
-
-@@
-identifier vcpu;
-fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
-fresh identifier vcpu_hyps = vcpu ## "_hyps";
-identifier remove.func;
-@@
- func(
-- vcpu
-+ vcpu_ctxt, vcpu_hyps
- , ...)
-
-//

From patchwork Fri Sep 24 12:53:59 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515385
Date: Fri, 24 Sep 2021 13:53:59 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-31-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 30/30] [DONOTMERGE] Re-enable warnings
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
 alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
 christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com,
 kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kernel-team@android.com, tabba@google.com

---
 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index 0278bd28bd97..ed669b2d705d 100644
--- a/Makefile
+++ b/Makefile
@@ -504,7 +504,7 @@ KBUILD_CFLAGS := -Wall -Wundef -Werror=strict-prototypes -Wno-trigraphs \
 		   -fno-strict-aliasing -fno-common -fshort-wchar -fno-PIE \
 		   -Werror=implicit-function-declaration -Werror=implicit-int \
 		   -Werror=return-type -Wno-format-security \
-		   -std=gnu89 -Wno-unused-variable -Wno-unused-function
+		   -std=gnu89
 KBUILD_CPPFLAGS := -D__KERNEL__
 KBUILD_AFLAGS_KERNEL :=
 KBUILD_CFLAGS_KERNEL :=
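[Editor's note] The deleted range.cocci script drives its match filtering from a `@script:python@` rule: a candidate `(start, end)` parameter pair is discarded unless the two types agree (or the second is `size_t`) and at least one parameter name looks like a range endpoint. The sketch below re-expresses that filter as standalone Python, outside spatch; the names `STARTS`, `ENDS`, and `keep_match` are illustrative, not Coccinelle API.

```python
# Standalone re-expression of the two cocci.include_match(False)
# conditions in range.cocci's @script:python@ rule. A match is kept
# only if neither condition rejects it.

STARTS = ("start", "begin", "from", "floor", "addr", "kaddr")
ENDS = ("size", "length", "len")

def keep_match(ta, tb, start, end):
    # Reject when the parameter types differ, unless the second is size_t.
    if ta != tb and tb != "size_t":
        return False
    # Reject when neither parameter name resembles a range endpoint.
    if not any(x in start for x in STARTS) and not any(x in end for x in ENDS):
        return False
    return True
```

As in the original script, a pair such as `(u64 start, u64 len)` survives the filter, while `(u64 start, int end)` is dropped on the type check alone.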