From patchwork Fri Sep 24 12:53:30 2021
X-Patchwork-Id: 12515405
From: Fuad Tabba <tabba@google.com>
Date: Fri, 24 Sep 2021 13:53:30 +0100
Subject: [RFC PATCH v1 01/30] KVM: arm64: placeholder to check if VM is protected
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com
Message-Id: <20210924125359.2587041-2-tabba@google.com>
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>

Add a function to check whether a VM is protected (under pKVM). Since
the creation of protected VMs isn't enabled yet, this is a placeholder
that always returns false. The intention is for this to become a real
check for protected VMs in the future (see Will's RFC).

No functional change intended.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>

Link: https://lore.kernel.org/kvmarm/20210603183347.1695-1-will@kernel.org/
---
 arch/arm64/include/asm/kvm_host.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7cd7d5c8c4bc..adb21a7f0891 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -763,6 +763,11 @@ void kvm_arch_free_vm(struct kvm *kvm);
 
 int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type);
 
+static inline bool kvm_vm_is_protected(struct kvm *kvm)
+{
+	return false;
+}
+
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
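To illustrate how this predicate is meant to be consumed once protected VMs can be created, here is a minimal, self-contained C sketch (illustrative only, not part of the series; struct kvm is stubbed and example_set_guest_debug() is a made-up caller):

    #include <errno.h>
    #include <stdbool.h>

    struct kvm { int unused; };     /* stand-in for the kernel's struct kvm */

    /* Mirrors the placeholder introduced by this patch. */
    static inline bool kvm_vm_is_protected(struct kvm *kvm)
    {
            return false;
    }

    /* Made-up caller: a host-side operation a protected VM must refuse. */
    static int example_set_guest_debug(struct kvm *kvm)
    {
            if (kvm_vm_is_protected(kvm))
                    return -EPERM;

            return 0;               /* proceed with normal handling */
    }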
From patchwork Fri Sep 24 12:53:31 2021
X-Patchwork-Id: 12515407
From: Fuad Tabba <tabba@google.com>
Date: Fri, 24 Sep 2021 13:53:31 +0100
Subject: [RFC PATCH v1 02/30] [DONOTMERGE] Temporarily disable unused variable warning
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com
Message-Id: <20210924125359.2587041-3-tabba@google.com>
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>

Later patches add variables and functions that won't be used
immediately. Disable the unused-variable and unused-function warnings
until then.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index ed669b2d705d..0278bd28bd97 100644
--- a/Makefile
+++ b/Makefile
@@ -504,7 +504,7 @@ KBUILD_CFLAGS   := -Wall -Wundef -Werror=strict-prototypes -Wno-trigraphs \
 		   -fno-strict-aliasing -fno-common -fshort-wchar -fno-PIE \
 		   -Werror=implicit-function-declaration -Werror=implicit-int \
 		   -Werror=return-type -Wno-format-security \
-		   -std=gnu89
+		   -std=gnu89 -Wno-unused-variable -Wno-unused-function
 KBUILD_CPPFLAGS := -D__KERNEL__
 KBUILD_AFLAGS_KERNEL :=
 KBUILD_CFLAGS_KERNEL :=
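For reference, the two added flags suppress exactly the class of diagnostics shown in this minimal, illustrative C sketch (identifiers are made up); with the flags in place, intermediate patches that introduce not-yet-used symbols still build cleanly under -Wall:

    /* Triggers -Wunused-function without the new flag: static, never called. */
    static int example_unused_helper(void)
    {
            return 0;
    }

    int example_entry(void)
    {
            int example_tmp = 0;    /* triggers -Wunused-variable without the new flag */

            return 1;
    }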
From patchwork Fri Sep 24 12:53:32 2021
X-Patchwork-Id: 12515411
From: Fuad Tabba <tabba@google.com>
Date: Fri, 24 Sep 2021 13:53:32 +0100
Subject: [RFC PATCH v1 03/30] [DONOTMERGE] Coccinelle scripts for refactoring
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com
Message-Id: <20210924125359.2587041-4-tabba@google.com>
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>

These Coccinelle scripts are used by the refactoring patches that
follow. Add them as a commit of their own to keep them in the history.
To run the scripts, please use a recent version of Coccinelle that
includes this patch [*].

Signed-off-by: Fuad Tabba <tabba@google.com>

[*] Link: https://lore.kernel.org/cocci/alpine.DEB.2.22.394.2104211654020.13358@hadrien/T/#t
---
 cocci_refactor/add_ctxt.cocci           | 169 ++++++++++++++++++++++
 cocci_refactor/add_hypstate.cocci       | 125 ++++++++++++++++
 cocci_refactor/hyp_ctxt.cocci           |  38 ++++++
 cocci_refactor/range.cocci              |  50 +++++++
 cocci_refactor/remove_unused.cocci      |  69 ++++++++++
 cocci_refactor/test.cocci               |  20 +++
 cocci_refactor/use_ctxt.cocci           |  32 +++++
 cocci_refactor/use_ctxt_access.cocci    |  39 ++++++
 cocci_refactor/use_hypstate.cocci       |  63 +++++++++
 cocci_refactor/vcpu_arch_ctxt.cocci     |  13 ++
 cocci_refactor/vcpu_declr.cocci         |  59 +++++++++
 cocci_refactor/vcpu_flags.cocci         |  10 ++
 cocci_refactor/vcpu_hyp_accessors.cocci |  35 +++++
 cocci_refactor/vcpu_hyp_state.cocci     |  30 +++++
 cocci_refactor/vgic3_cpu.cocci          | 118 +++++++++++++++
 15 files changed, 870 insertions(+)
 create mode 100644 cocci_refactor/add_ctxt.cocci
 create mode 100644 cocci_refactor/add_hypstate.cocci
 create mode 100644 cocci_refactor/hyp_ctxt.cocci
 create mode 100644 cocci_refactor/range.cocci
 create mode 100644 cocci_refactor/remove_unused.cocci
 create mode 100644 cocci_refactor/test.cocci
 create mode 100644 cocci_refactor/use_ctxt.cocci
 create mode 100644 cocci_refactor/use_ctxt_access.cocci
 create mode 100644 cocci_refactor/use_hypstate.cocci
 create mode 100644 cocci_refactor/vcpu_arch_ctxt.cocci
 create mode 100644 cocci_refactor/vcpu_declr.cocci
 create mode 100644 cocci_refactor/vcpu_flags.cocci
 create mode 100644 cocci_refactor/vcpu_hyp_accessors.cocci
 create mode 100644 cocci_refactor/vcpu_hyp_state.cocci
 create mode 100644 cocci_refactor/vgic3_cpu.cocci

diff --git a/cocci_refactor/add_ctxt.cocci b/cocci_refactor/add_ctxt.cocci
new file mode 100644
index 000000000000..203644944ace
--- /dev/null
+++ b/cocci_refactor/add_ctxt.cocci
@@ -0,0 +1,169 @@
+//
+
+/*
+spatch --sp-file add_ctxt.cocci --dir arch/arm64/kvm/hyp --ignore arch/arm64/kvm/hyp/nvhe/debug-sr.c --ignore arch/arm64/kvm/hyp/vhe/debug-sr.c --include-headers --in-place
+*/
+
+
+@exists@
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+identifier fc;
+@@
+<...
+(
+ struct kvm_vcpu *vcpu = NULL;
++ struct kvm_cpu_context *vcpu_ctxt;
+|
+ struct kvm_vcpu *vcpu = ...;
++ struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+|
+ struct kvm_vcpu *vcpu;
++ struct kvm_cpu_context *vcpu_ctxt;
+)
+<...
+ vcpu = ...;
++ vcpu_ctxt = &vcpu_ctxt(vcpu);
+...>
+fc(..., vcpu, ...)
+...>
+
+@exists@
+identifier func != {kvm_arch_vcpu_run_pid_change};
+identifier fc != {vcpu_ctxt};
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+@@
+func(..., struct kvm_vcpu *vcpu, ...) {
++ struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+<+...
+fc(..., vcpu, ...)
+...+>
+ }
+
+@@
+expression a, b;
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+iterator name kvm_for_each_vcpu;
+identifier fc;
+@@
+kvm_for_each_vcpu(a, vcpu, b)
+ {
++ vcpu_ctxt = &vcpu_ctxt(vcpu);
+<+...
+fc(..., vcpu, ...)
+...+>
+ }
+
+@@
+identifier vcpu_ctxt, vcpu;
+iterator name kvm_for_each_vcpu;
+type T;
+identifier x;
+statement S1, S2;
+@@
+kvm_for_each_vcpu(...)
+ {
+- vcpu_ctxt = &vcpu_ctxt(vcpu);
+... when != S1
++ vcpu_ctxt = &vcpu_ctxt(vcpu);
+ S2
+ ... when any
+ }
+
+@
+disable optional_qualifier
+exists
+@
+identifier vcpu;
+identifier vcpu_ctxt;
+@@
+<...
+ const struct kvm_vcpu *vcpu = ...;
+- struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
++ const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+...>
+
+@disable optional_qualifier@
+identifier func, vcpu;
+identifier vcpu_ctxt;
+@@
+func(..., const struct kvm_vcpu *vcpu, ...) {
+- struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
++ const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
+...
+ }
+
+@exists@
+expression r1, r2;
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+@@
+(
+- vcpu_gp_regs(vcpu)
++ ctxt_gp_regs(vcpu_ctxt)
+|
+- vcpu_spsr_abt(vcpu)
++ ctxt_spsr_abt(vcpu_ctxt)
+|
+- vcpu_spsr_und(vcpu)
++ ctxt_spsr_und(vcpu_ctxt)
+|
+- vcpu_spsr_irq(vcpu)
++ ctxt_spsr_irq(vcpu_ctxt)
+|
+- vcpu_spsr_fiq(vcpu)
++ ctxt_spsr_fiq(vcpu_ctxt)
+|
+- vcpu_fp_regs(vcpu)
++ ctxt_fp_regs(vcpu_ctxt)
+|
+- __vcpu_sys_reg(vcpu, r1)
++ ctxt_sys_reg(vcpu_ctxt, r1)
+|
+- __vcpu_read_sys_reg(vcpu, r1)
++ __ctxt_read_sys_reg(vcpu_ctxt, r1)
+|
+- __vcpu_write_sys_reg(vcpu, r1, r2)
++ __ctxt_write_sys_reg(vcpu_ctxt, r1, r2)
+|
+- __vcpu_write_spsr(vcpu, r1)
++ __ctxt_write_spsr(vcpu_ctxt, r1)
+|
+- __vcpu_write_spsr_abt(vcpu, r1)
++ __ctxt_write_spsr_abt(vcpu_ctxt, r1)
+|
+- __vcpu_write_spsr_und(vcpu, r1)
++ __ctxt_write_spsr_und(vcpu_ctxt, r1)
+|
+- vcpu_pc(vcpu)
++ ctxt_pc(vcpu_ctxt)
+|
+- vcpu_cpsr(vcpu)
++ ctxt_cpsr(vcpu_ctxt)
+|
+- vcpu_mode_is_32bit(vcpu)
++ ctxt_mode_is_32bit(vcpu_ctxt)
+|
+- vcpu_set_thumb(vcpu)
++ ctxt_set_thumb(vcpu_ctxt)
+|
+- vcpu_get_reg(vcpu, r1)
++ ctxt_get_reg(vcpu_ctxt, r1)
+|
+- vcpu_set_reg(vcpu, r1, r2)
++ ctxt_set_reg(vcpu_ctxt, r1, r2)
+)
+
+
+/* Handles one case of a call within a call. */
+@@
+expression r1, r2;
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+@@
+- vcpu_pc(vcpu)
++ ctxt_pc(vcpu_ctxt)
+
+//
diff --git a/cocci_refactor/add_hypstate.cocci b/cocci_refactor/add_hypstate.cocci
new file mode 100644
index 000000000000..e8635d0e8f57
--- /dev/null
+++ b/cocci_refactor/add_hypstate.cocci
@@ -0,0 +1,125 @@
+//
+
+/*
+FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]" ! -name "debug-sr*") arch/arm64/include/asm/kvm_hyp.h"
+spatch --sp-file add_hypstate.cocci $FILES --in-place
+*/
+
+@exists@
+identifier vcpu;
+fresh identifier hyps = vcpu ## "_hyps";
+identifier fc;
+@@
+<...
+(
+ struct kvm_vcpu *vcpu = NULL;
++ struct vcpu_hyp_state *hyps;
+|
+ struct kvm_vcpu *vcpu = ...;
++ struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
+|
+ struct kvm_vcpu *vcpu;
++ struct vcpu_hyp_state *hyps;
+)
+<...
+ vcpu = ...;
++ hyps = &hyp_state(vcpu);
+...>
+fc(..., vcpu, ...)
+...>
+
+@exists@
+identifier func != {kvm_arch_vcpu_run_pid_change};
+identifier vcpu;
+fresh identifier hyps = vcpu ## "_hyps";
+identifier fc;
+@@
+func(..., struct kvm_vcpu *vcpu, ...) {
++ struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
+<+...
+fc(..., vcpu, ...)
+...+>
+ }
+
+@@
+expression a, b;
+identifier vcpu;
+fresh identifier hyps = vcpu ## "_hyps";
+iterator name kvm_for_each_vcpu;
+identifier fc;
+@@
+kvm_for_each_vcpu(a, vcpu, b)
+ {
++ hyps = &hyp_state(vcpu);
+<+...
+fc(..., vcpu, ...)
+...+>
+ }
+
+@@
+identifier hyps, vcpu;
+iterator name kvm_for_each_vcpu;
+statement S1, S2;
+@@
+kvm_for_each_vcpu(...)
+ {
+- hyps = &hyp_state(vcpu);
+... when != S1
++ hyps = &hyp_state(vcpu);
+ S2
+ ... when any
+ }
+
+@
+disable optional_qualifier
+exists
+@
+identifier vcpu, hyps;
+@@
+<...
+ const struct kvm_vcpu *vcpu = ...;
+- struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
++ const struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
+...>
+
+
+@@
+identifier func, vcpu, hyps;
+@@
+func(..., const struct kvm_vcpu *vcpu, ...) {
+- struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
++ const struct vcpu_hyp_state *hyps = &hyp_state(vcpu);
+...
+ }
+
+@exists@
+identifier vcpu;
+fresh identifier hyps = vcpu ## "_hyps";
+@@
+(
+- vcpu_hcr_el2(vcpu)
++ hyp_state_hcr_el2(hyps)
+|
+- vcpu_mdcr_el2(vcpu)
++ hyp_state_mdcr_el2(hyps)
+|
+- vcpu_vsesr_el2(vcpu)
++ hyp_state_vsesr_el2(hyps)
+|
+- vcpu_fault(vcpu)
++ hyp_state_fault(hyps)
+|
+- vcpu_flags(vcpu)
++ hyp_state_flags(hyps)
+|
+- vcpu_has_sve(vcpu)
++ hyp_state_has_sve(hyps)
+|
+- vcpu_has_ptrauth(vcpu)
++ hyp_state_has_ptrauth(hyps)
+|
+- kvm_arm_vcpu_sve_finalized(vcpu)
++ kvm_arm_hyp_state_sve_finalized(hyps)
+)
+
+//
diff --git a/cocci_refactor/hyp_ctxt.cocci b/cocci_refactor/hyp_ctxt.cocci
new file mode 100644
index 000000000000..af7974e3a502
--- /dev/null
+++ b/cocci_refactor/hyp_ctxt.cocci
@@ -0,0 +1,38 @@
+// Remove vcpu if all we're using is hypstate and ctxt
+
+/*
+FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]")"
+spatch --sp-file hyp_ctxt.cocci $FILES --in-place;
+*/
+
+//
+
+@remove@
+identifier func !~ "^trap_|^access_|dbg_to_reg|check_pmu_access_disabled|match_mpidr|get_ctr_el0|emulate_cp|unhandled_cp_access|index_to_sys_reg_desc|kvm_pmu_|pmu_counter_idx_valid|reset_|read_from_write_only|write_to_read_only|undef_access|vgic_|kvm_handle_|handle_sve|handle_smc|handle_no_fpsimd|id_visibility|reg_to_dbg|ptrauth_visibility|sve_visibility|kvm_arch_sched_in|kvm_arch_vcpu_|kvm_vcpu_pmu_|kvm_psci_|kvm_arm_copy_fw_reg_indices|kvm_arm_pvtime_|kvm_trng_|kvm_arm_timer_";
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+fresh identifier vcpu_hyps = vcpu ## "_hyps";
+identifier hyps_remove;
+identifier ctxt_remove;
+@@
+func(...,
+- struct kvm_vcpu *vcpu
++ struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps
+,...) {
+?- struct vcpu_hyp_state *hyps_remove = ...;
+?- struct kvm_cpu_context *ctxt_remove = ...;
+... when != vcpu
+ }
+
+@@
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+fresh identifier vcpu_hyps = vcpu ## "_hyps";
+identifier remove.func;
+@@
+ func(
+- vcpu
++ vcpu_ctxt, vcpu_hyps
+ , ...)
+
+//
\ No newline at end of file
diff --git a/cocci_refactor/range.cocci b/cocci_refactor/range.cocci
new file mode 100644
index 000000000000..d99b9ee30657
--- /dev/null
+++ b/cocci_refactor/range.cocci
@@ -0,0 +1,50 @@
+
+
+//
+
+/*
+ FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h"; spatch --sp-file range.cocci $FILES
+*/
+
+@initialize:python@
+@@
+starts = ("start", "begin", "from", "floor", "addr", "kaddr")
+ends = ("size", "length", "len")
+
+//ends = ("end", "to", "ceiling", "size", "length", "len")
+
+
+@start_end@
+identifier f;
+type A, B;
+identifier start, end;
+parameter list[n] ps;
+@@
+f(ps, A start, B end, ...) {
+...
+}
+
+@script:python@
+start << start_end.start;
+end << start_end.end;
+ta << start_end.A;
+tb << start_end.B;
+@@
+
+if ta != tb and tb != "size_t":
+    cocci.include_match(False)
+elif not any(x in start for x in starts) and not any(x in end for x in ends):
+    cocci.include_match(False)
+
+@@
+identifier f = start_end.f;
+expression list[start_end.n] xs;
+expression a, b;
+@@
+(
+* f(xs, a, a, ...)
+|
+* f(xs, a, a - b, ...)
+)
+
+//
\ No newline at end of file
diff --git a/cocci_refactor/remove_unused.cocci b/cocci_refactor/remove_unused.cocci
new file mode 100644
index 000000000000..c06278398198
--- /dev/null
+++ b/cocci_refactor/remove_unused.cocci
@@ -0,0 +1,69 @@
+//
+
+/*
+spatch --sp-file remove_unused.cocci --dir arch/arm64/kvm/hyp --in-place --include-headers --force-diff
+*/
+
+@@
+identifier hyps;
+@@
+{
+...
+(
+- struct vcpu_hyp_state *hyps = ...;
+|
+- struct vcpu_hyp_state *hyps;
+)
+... when != hyps
+    when != if (...) { <+...hyps...+> }
+?- hyps = ...;
+... when != hyps
+    when != if (...) { <+...hyps...+> }
+}
+
+@@
+identifier vcpu_ctxt;
+@@
+{
+...
+(
+- struct kvm_cpu_context *vcpu_ctxt = ...;
+|
+- struct kvm_cpu_context *vcpu_ctxt;
+)
+... when != vcpu_ctxt
+    when != if (...) { <+...vcpu_ctxt...+> }
+?- vcpu_ctxt = ...;
+... when != vcpu_ctxt
+    when != if (...) { <+...vcpu_ctxt...+> }
+}
+
+@@
+identifier x;
+identifier func;
+statement S;
+@@
+func(...)
+ {
+...
+struct kvm_cpu_context *x = ...;
++
+S
+...
+ }
+
+@@
+identifier x;
+identifier func;
+statement S;
+@@
+func(...)
+ {
+...
+struct vcpu_hyp_state *x = ...;
++
+S
+...
+ }
+
+//
diff --git a/cocci_refactor/test.cocci b/cocci_refactor/test.cocci
new file mode 100644
index 000000000000..5eb685240ce7
--- /dev/null
+++ b/cocci_refactor/test.cocci
@@ -0,0 +1,20 @@
+/*
+ FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h"; spatch --sp-file test.cocci $FILES
+
+*/
+
+@r@
+identifier fn;
+@@
+fn(...) {
+ hello;
+ ...
+}
+
+@@
+identifier r.fn;
+@@
+static fn(...) {
++ world;
+ ...
+}
diff --git a/cocci_refactor/use_ctxt.cocci b/cocci_refactor/use_ctxt.cocci
new file mode 100644
index 000000000000..f3f961f567fd
--- /dev/null
+++ b/cocci_refactor/use_ctxt.cocci
@@ -0,0 +1,32 @@
+//
+/*
+spatch --sp-file use_ctxt.cocci --dir arch/arm64/kvm/hyp --ignore debug-sr --include-headers --in-place
+spatch --sp-file use_ctxt.cocci --dir arch/arm64/kvm/hyp --ignore debug-sr --include-headers --in-place
+*/
+
+@remove_vcpu@
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+identifier ctxt_remove;
+identifier func !~ "(reset_unknown|reset_val|kvm_pmu_valid_counter_mask|reset_pmcr|kvm_arch_vcpu_in_kernel|__vgic_v3_)";
+@@
+func(
+- struct kvm_vcpu *vcpu
++ struct kvm_cpu_context *vcpu_ctxt
+, ...) {
+- struct kvm_cpu_context *ctxt_remove = ...;
+... when != vcpu
+    when != if (...) { <+...vcpu...+> }
+}
+
+@@
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+identifier func = remove_vcpu.func;
+@@
+func(
+- vcpu
++ vcpu_ctxt
+ , ...)
+
+//
diff --git a/cocci_refactor/use_ctxt_access.cocci b/cocci_refactor/use_ctxt_access.cocci
new file mode 100644
index 000000000000..74f94141e662
--- /dev/null
+++ b/cocci_refactor/use_ctxt_access.cocci
@@ -0,0 +1,39 @@
+//
+
+/*
+spatch --sp-file use_ctxt_access.cocci --dir arch/arm64/kvm/ --include-headers --in-place
+*/
+
+@@
+constant r;
+@@
+- __ctxt_sys_reg(&vcpu->arch.ctxt, r)
++ &__vcpu_sys_reg(vcpu, r)
+
+@@
+identifier r;
+@@
+- vcpu->arch.ctxt.regs.r
++ vcpu_gp_regs(vcpu)->r
+
+@@
+identifier r;
+@@
+- vcpu->arch.ctxt.fp_regs.r
++ vcpu_fp_regs(vcpu)->r
+
+@@
+identifier r;
+fresh identifier accessor = "vcpu_" ## r;
+@@
+- &vcpu->arch.ctxt.r
++ accessor(vcpu)
+
+@@
+identifier r;
+fresh identifier accessor = "vcpu_" ## r;
+@@
+- vcpu->arch.ctxt.r
++ *accessor(vcpu)
+
+//
\ No newline at end of file
diff --git a/cocci_refactor/use_hypstate.cocci b/cocci_refactor/use_hypstate.cocci
new file mode 100644
index 000000000000..f685149de748
--- /dev/null
+++ b/cocci_refactor/use_hypstate.cocci
@@ -0,0 +1,63 @@
+//
+
+/*
+FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]" ! -name "debug-sr*") arch/arm64/include/asm/kvm_hyp.h"
+spatch --sp-file use_hypstate.cocci $FILES --in-place
+*/
+
+
+@remove_vcpu_hyps@
+identifier vcpu;
+fresh identifier hyps = vcpu ## "_hyps";
+identifier hyps_remove;
+identifier func;
+@@
+func(
+- struct kvm_vcpu *vcpu
++ struct vcpu_hyp_state *hyps
+, ...) {
+- struct vcpu_hyp_state *hyps_remove = ...;
+... when != vcpu
+    when != if (...) { <+...vcpu...+> }
+}
+
+@@
+identifier vcpu;
+fresh identifier hyps = vcpu ## "_hyps";
+identifier func = remove_vcpu_hyps.func;
+@@
+func(
+- vcpu
++ hyps
+ , ...)
+
+@remove_vcpu_hyps_ctxt@
+identifier vcpu;
+fresh identifier hyps = vcpu ## "_hyps";
+identifier hyps_remove;
+identifier ctxt_remove;
+identifier func;
+@@
+func(
+- struct kvm_vcpu *vcpu
++ struct vcpu_hyp_state *hyps
+, ...) {
+- struct vcpu_hyp_state *hyps_remove = ...;
+- struct kvm_cpu_context *ctxt_remove = ...;
+... when != vcpu
+    when != if (...) { <+...vcpu...+> }
+    when != ctxt_remove
+    when != if (...) { <+...ctxt_remove...+> }
+}
+
+@@
+identifier vcpu;
+fresh identifier hyps = vcpu ## "_hyps";
+identifier func = remove_vcpu_hyps_ctxt.func;
+@@
+func(
+- vcpu
++ hyps
+ , ...)
+
+//
diff --git a/cocci_refactor/vcpu_arch_ctxt.cocci b/cocci_refactor/vcpu_arch_ctxt.cocci
new file mode 100644
index 000000000000..69b3a000de4e
--- /dev/null
+++ b/cocci_refactor/vcpu_arch_ctxt.cocci
@@ -0,0 +1,13 @@
+// spatch --sp-file vcpu_arch_ctxt.cocci --no-includes --include-headers --dir arch/arm64
+
+//
+@@
+identifier vcpu;
+@@
+(
+- vcpu->arch.ctxt.regs
++ vcpu_gp_regs(vcpu)
+|
+- vcpu->arch.ctxt.fp_regs
++ vcpu_fp_regs(vcpu)
+)
diff --git a/cocci_refactor/vcpu_declr.cocci b/cocci_refactor/vcpu_declr.cocci
new file mode 100644
index 000000000000..59cd46bd6b2d
--- /dev/null
+++ b/cocci_refactor/vcpu_declr.cocci
@@ -0,0 +1,59 @@
+
+/*
+FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h"; spatch --sp-file vcpu_declr.cocci $FILES --in-place
+*/
+
+//
+
+@@
+identifier vcpu;
+expression E;
+@@
+<...
+- struct kvm_vcpu *vcpu;
++ struct kvm_vcpu *vcpu = E;
+
+- vcpu = E;
+...>
+
+
+/*
+@@
+identifier vcpu;
+identifier f1, f2;
+@@
+f1(...)
+{
+- struct kvm_vcpu *vcpu = NULL;
++ struct kvm_vcpu *vcpu;
+... when != f2(..., vcpu, ...)
+}
+*/
+
+/*
+@find_after@
+identifier vcpu;
+position p;
+identifier f;
+@@
+<...
+ struct kvm_vcpu *vcpu@p;
+ ... when != vcpu = ...;
+ f(..., vcpu, ...);
+...>
+
+@@
+identifier vcpu;
+expression E;
+position p != find_after.p;
+@@
+<...
+- struct kvm_vcpu *vcpu@p;
++ struct kvm_vcpu *vcpu = E;
+ ...
+- vcpu = E;
+...>
+
+*/
+
+//
diff --git a/cocci_refactor/vcpu_flags.cocci b/cocci_refactor/vcpu_flags.cocci
new file mode 100644
index 000000000000..609bb7bd7bd0
--- /dev/null
+++ b/cocci_refactor/vcpu_flags.cocci
@@ -0,0 +1,10 @@
+// spatch --sp-file el2_def_flags.cocci --no-includes --include-headers --dir arch/arm64
+
+//
+@@
+expression vcpu;
+@@
+
+- vcpu->arch.flags
++ vcpu_flags(vcpu)
+//
\ No newline at end of file
diff --git a/cocci_refactor/vcpu_hyp_accessors.cocci b/cocci_refactor/vcpu_hyp_accessors.cocci
new file mode 100644
index 000000000000..506b56f7216f
--- /dev/null
+++ b/cocci_refactor/vcpu_hyp_accessors.cocci
@@ -0,0 +1,35 @@
+//
+
+/*
+spatch --sp-file vcpu_hyp_accessors.cocci --dir arch/arm64 --include-headers --in-place
+*/
+
+@find_defines@
+identifier macro;
+identifier vcpu;
+position p;
+@@
+#define macro(vcpu) vcpu@p
+
+@@
+identifier vcpu;
+position p != find_defines.p;
+@@
+(
+- vcpu@p->arch.hcr_el2
++ vcpu_hcr_el2(vcpu)
+|
+- vcpu@p->arch.mdcr_el2
++ vcpu_mdcr_el2(vcpu)
+|
+- vcpu@p->arch.vsesr_el2
++ vcpu_vsesr_el2(vcpu)
+|
+- vcpu@p->arch.fault
++ vcpu_fault(vcpu)
+|
+- vcpu@p->arch.flags
++ vcpu_flags(vcpu)
+)
+
+//
diff --git a/cocci_refactor/vcpu_hyp_state.cocci b/cocci_refactor/vcpu_hyp_state.cocci
new file mode 100644
index 000000000000..3005a6f11871
--- /dev/null
+++ b/cocci_refactor/vcpu_hyp_state.cocci
@@ -0,0 +1,30 @@
+//
+
+// spatch --sp-file vcpu_hyp_state.cocci --no-includes --include-headers --dir arch/arm64 --very-quiet --in-place
+
+@@
+expression vcpu;
+@@
+- vcpu->arch.
++ vcpu->arch.hyp_state.
+(
+ hcr_el2
+|
+ mdcr_el2
+|
+ vsesr_el2
+|
+ fault
+|
+ flags
+|
+ sysregs_loaded_on_cpu
+)
+
+@@
+identifier arch;
+@@
+- arch.fault
++ arch.hyp_state.fault
+
+//
\ No newline at end of file
diff --git a/cocci_refactor/vgic3_cpu.cocci b/cocci_refactor/vgic3_cpu.cocci
new file mode 100644
index 000000000000..f7495b2e49cb
--- /dev/null
+++ b/cocci_refactor/vgic3_cpu.cocci
@@ -0,0 +1,118 @@
+//
+
+/*
+spatch --sp-file vgic3_cpu.cocci arch/arm64/kvm/hyp/vgic-v3-sr.c --in-place
+*/
+
+
+@@
+identifier vcpu;
+fresh identifier vcpu_hyps = vcpu ## "_hyps";
+@@
+(
+- kvm_vcpu_sys_get_rt
++ kvm_hyp_state_sys_get_rt
+|
+- kvm_vcpu_get_esr
++ kvm_hyp_state_get_esr
+)
+- (vcpu)
++ (vcpu_hyps)
+
+@add_cpu_if@
+identifier func;
+identifier c;
+@@
+int func(
+- struct kvm_vcpu *vcpu
++ struct vgic_v3_cpu_if *cpu_if
+ , ...)
+{
+<+...
+- vcpu->arch.vgic_cpu.vgic_v3.c
++ cpu_if->c
+...+>
+}
+
+@@
+identifier func = add_cpu_if.func;
+@@
+ func(
+- vcpu
++ cpu_if
+ , ...
+ )
+
+
+@add_vgic_ctxt_hyps@
+identifier func;
+@@
+void func(
+- struct kvm_vcpu *vcpu
++ struct vgic_v3_cpu_if *cpu_if, struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps
+ , ...) {
+?- struct vcpu_hyp_state *vcpu_hyps = ...;
+?- struct kvm_cpu_context *vcpu_ctxt = ...;
+ ...
+ }
+
+@@
+identifier func = add_vgic_ctxt_hyps.func;
+@@
+ func(
+- vcpu,
++ cpu_if, vcpu_ctxt, vcpu_hyps,
+ ...
+ )
+
+
+@find_calls@
+identifier fn;
+type a, b;
+@@
+- void (*fn)(struct kvm_vcpu *, a, b);
++ void (*fn)(struct vgic_v3_cpu_if *, struct kvm_cpu_context *, struct vcpu_hyp_state *, a, b);
+
+@@
+identifier fn = find_calls.fn;
+identifier a, b;
+@@
+- fn(vcpu, a, b);
++ fn(cpu_if, vcpu_ctxt, vcpu_hyps, a, b);
+
+@@
+@@
+int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) {
++ struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+...
+}
+
+@remove@
+identifier func;
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+fresh identifier vcpu_hyps = vcpu ## "_hyps";
+identifier hyps_remove;
+identifier ctxt_remove;
+@@
+func(...,
+- struct kvm_vcpu *vcpu
++ struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps
+,...) {
+?- struct vcpu_hyp_state *hyps_remove = ...;
+?- struct kvm_cpu_context *ctxt_remove = ...;
+... when != vcpu
+ }
+
+@@
+identifier vcpu;
+fresh identifier vcpu_ctxt = vcpu ## "_ctxt";
+fresh identifier vcpu_hyps = vcpu ## "_hyps";
+identifier remove.func;
+@@
+ func(
+- vcpu
++ vcpu_ctxt, vcpu_hyps
+ , ...)
+
+//
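To make the effect of these scripts concrete, here is a hedged before/after sketch of what add_ctxt.cocci does to a hypothetical hypervisor function (example_skip_instr() is invented; vcpu_ctxt(), vcpu_pc() and ctxt_pc() are the accessors introduced elsewhere in this series):

    /* Before: takes a vcpu and uses the vcpu-level accessor. */
    static void example_skip_instr(struct kvm_vcpu *vcpu)
    {
            *vcpu_pc(vcpu) += 4;
    }

    /* After add_ctxt.cocci: a context variable is declared from the vcpu
     * and the vcpu-level accessor becomes its ctxt-level counterpart. */
    static void example_skip_instr(struct kvm_vcpu *vcpu)
    {
            struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);

            *ctxt_pc(vcpu_ctxt) += 4;
    }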
From patchwork Fri Sep 24 12:53:33 2021
X-Patchwork-Id: 12515413
From: Fuad Tabba <tabba@google.com>
Date: Fri, 24 Sep 2021 13:53:33 +0100
Subject: [RFC PATCH v1 04/30] KVM: arm64: remove unused parameters and asm offsets
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com
Message-Id: <20210924125359.2587041-5-tabba@google.com>
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>

Remove unused vcpu function parameters and asm-offset definitions. This
makes the code cleaner and simplifies future refactoring.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_hyp.h   | 4 ++--
 arch/arm64/kernel/asm-offsets.c    | 1 -
 arch/arm64/kvm/hyp/nvhe/switch.c   | 6 +++---
 arch/arm64/kvm/hyp/nvhe/timer-sr.c | 4 ++--
 4 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 9d60b3006efc..2e2b60a1b6c7 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -66,8 +66,8 @@ void __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if);
 int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu);
 
 #ifdef __KVM_NVHE_HYPERVISOR__
-void __timer_enable_traps(struct kvm_vcpu *vcpu);
-void __timer_disable_traps(struct kvm_vcpu *vcpu);
+void __timer_enable_traps(void);
+void __timer_disable_traps(void);
 #endif
 
 #ifdef __KVM_NVHE_HYPERVISOR__
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 0cb34ccb6e73..c2cc3a2813e6 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -109,7 +109,6 @@ int main(void)
   DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
   DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
   DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
-  DEFINE(VCPU_HCR_EL2,		offsetof(struct kvm_vcpu, arch.hcr_el2));
   DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_cpu_context, regs));
   DEFINE(CPU_APIAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
   DEFINE(CPU_APIBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1]));
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index f7af9688c1f7..9296d7108f93 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -217,7 +217,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__activate_traps(vcpu);
 
 	__hyp_vgic_restore_state(vcpu);
-	__timer_enable_traps(vcpu);
+	__timer_enable_traps();
 
 	__debug_switch_to_guest(vcpu);
 
@@ -230,7 +230,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__sysreg_save_state_nvhe(guest_ctxt);
 	__sysreg32_save_state(vcpu);
 
-	__timer_disable_traps(vcpu);
+	__timer_disable_traps();
 	__hyp_vgic_save_state(vcpu);
 
 	__deactivate_traps(vcpu);
@@ -272,7 +272,7 @@ void __noreturn hyp_panic(void)
 	vcpu = host_ctxt->__hyp_running_vcpu;
 
 	if (vcpu) {
-		__timer_disable_traps(vcpu);
+		__timer_disable_traps();
 		__deactivate_traps(vcpu);
 		__load_host_stage2();
 		__sysreg_restore_state_nvhe(host_ctxt);
diff --git a/arch/arm64/kvm/hyp/nvhe/timer-sr.c b/arch/arm64/kvm/hyp/nvhe/timer-sr.c
index 9072e71693ba..7b2a23ccdb0a 100644
--- a/arch/arm64/kvm/hyp/nvhe/timer-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/timer-sr.c
@@ -19,7 +19,7 @@ void __kvm_timer_set_cntvoff(u64 cntvoff)
  * Should only be called on non-VHE systems.
  * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe().
  */
-void __timer_disable_traps(struct kvm_vcpu *vcpu)
+void __timer_disable_traps(void)
 {
 	u64 val;
 
@@ -33,7 +33,7 @@ void __timer_disable_traps(struct kvm_vcpu *vcpu)
  * Should only be called on non-VHE systems.
  * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe().
  */
-void __timer_enable_traps(struct kvm_vcpu *vcpu)
+void __timer_enable_traps(void)
 {
 	u64 val;
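Dropping VCPU_HCR_EL2 is safe because no assembly file references that constant any more. For readers unfamiliar with the asm-offsets mechanism: each DEFINE() makes a C structure offset visible to assembly by printing it into the compiler's generated .s output, which the build then harvests into a header of #defines. A simplified, hypothetical sketch of the idea (the real macro lives in include/linux/kbuild.h; all names here are invented):

    #include <stddef.h>

    struct example_vcpu {
            unsigned long flags;
            unsigned long hcr_el2;
    };

    /* Simplified DEFINE(): the "i" constraint makes the compiler embed
     * the constant in its assembly output, where a script collects it. */
    #define EXAMPLE_DEFINE(sym, val) \
            asm volatile("\n.ascii \"->" #sym " %0\"" : : "i" (val))

    int main(void)
    {
            EXAMPLE_DEFINE(EXAMPLE_VCPU_HCR_EL2,
                           offsetof(struct example_vcpu, hcr_el2));
            return 0;
    }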
From patchwork Fri Sep 24 12:53:34 2021
X-Patchwork-Id: 12515415
From: Fuad Tabba <tabba@google.com>
Date: Fri, 24 Sep 2021 13:53:34 +0100
Subject: [RFC PATCH v1 05/30] KVM: arm64: add accessors for kvm_cpu_context
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com
Message-Id: <20210924125359.2587041-6-tabba@google.com>
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>

Add accessors to get/set the elements of struct kvm_cpu_context. This
simplifies future refactoring and makes the code more consistent.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_emulate.h | 53 ++++++++++++++++++++++------
 arch/arm64/include/asm/kvm_host.h    | 18 +++++++++-
 arch/arm64/kvm/hyp/exception.c       | 43 +++++++++++++++++-----
 3 files changed, 94 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 01b9857757f2..ad6e53cef1a4 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -127,19 +127,34 @@ static inline void vcpu_set_vsesr(struct kvm_vcpu *vcpu, u64 vsesr)
 	vcpu->arch.vsesr_el2 = vsesr;
 }
 
+static __always_inline unsigned long *ctxt_pc(const struct kvm_cpu_context *ctxt)
+{
+	return (unsigned long *)&ctxt_gp_regs(ctxt)->pc;
+}
+
 static __always_inline unsigned long *vcpu_pc(const struct kvm_vcpu *vcpu)
 {
-	return (unsigned long *)&vcpu_gp_regs(vcpu)->pc;
+	return ctxt_pc(&vcpu_ctxt(vcpu));
+}
+
+static __always_inline unsigned long *ctxt_cpsr(const struct kvm_cpu_context *ctxt)
+{
+	return (unsigned long *)&ctxt_gp_regs(ctxt)->pstate;
 }
 
 static __always_inline unsigned long *vcpu_cpsr(const struct kvm_vcpu *vcpu)
 {
-	return (unsigned long *)&vcpu_gp_regs(vcpu)->pstate;
+	return ctxt_cpsr(&vcpu_ctxt(vcpu));
+}
+
+static __always_inline bool ctxt_mode_is_32bit(const struct kvm_cpu_context *ctxt)
+{
+	return !!(*ctxt_cpsr(ctxt) & PSR_MODE32_BIT);
 }
 
 static __always_inline bool vcpu_mode_is_32bit(const struct kvm_vcpu *vcpu)
 {
-	return !!(*vcpu_cpsr(vcpu) & PSR_MODE32_BIT);
+	return ctxt_mode_is_32bit(&vcpu_ctxt(vcpu));
 }
 
 static __always_inline bool kvm_condition_valid(const struct kvm_vcpu *vcpu)
@@ -150,27 +165,45 @@ static __always_inline bool kvm_condition_valid(const struct kvm_vcpu *vcpu)
 	return true;
 }
 
+static inline void ctxt_set_thumb(struct kvm_cpu_context *ctxt)
+{
+	*ctxt_cpsr(ctxt) |= PSR_AA32_T_BIT;
+}
+
 static inline void vcpu_set_thumb(struct kvm_vcpu *vcpu)
 {
-	*vcpu_cpsr(vcpu) |= PSR_AA32_T_BIT;
+	ctxt_set_thumb(&vcpu_ctxt(vcpu));
 }
 
 /*
- * vcpu_get_reg and vcpu_set_reg should always be passed a register number
- * coming from a read of ESR_EL2. Otherwise, it may give the wrong result on
- * AArch32 with banked registers.
+ * vcpu/ctxt_get_reg and vcpu/ctxt_set_reg should always be passed a register
+ * number coming from a read of ESR_EL2. Otherwise, it may give the wrong result
+ * on AArch32 with banked registers.
  */
+static __always_inline unsigned long
+ctxt_get_reg(const struct kvm_cpu_context *ctxt, u8 reg_num)
+{
+	return (reg_num == 31) ? 0 : ctxt_gp_regs(ctxt)->regs[reg_num];
+}
+
+static __always_inline void
+ctxt_set_reg(struct kvm_cpu_context *ctxt, u8 reg_num, unsigned long val)
+{
+	if (reg_num != 31)
+		ctxt_gp_regs(ctxt)->regs[reg_num] = val;
+}
+
 static __always_inline unsigned long vcpu_get_reg(const struct kvm_vcpu *vcpu,
 						  u8 reg_num)
 {
-	return (reg_num == 31) ? 0 : vcpu_gp_regs(vcpu)->regs[reg_num];
+	return ctxt_get_reg(&vcpu_ctxt(vcpu), reg_num);
+
 }
 
 static __always_inline void vcpu_set_reg(struct kvm_vcpu *vcpu, u8 reg_num,
 					 unsigned long val)
 {
-	if (reg_num != 31)
-		vcpu_gp_regs(vcpu)->regs[reg_num] = val;
+	ctxt_set_reg(&vcpu_ctxt(vcpu), reg_num, val);
 }
 
 /*
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index adb21a7f0891..097e5f533af9 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -446,7 +446,23 @@ struct kvm_vcpu_arch {
 #define vcpu_has_ptrauth(vcpu)		false
 #endif
 
-#define vcpu_gp_regs(v)		(&(v)->arch.ctxt.regs)
+#define vcpu_ctxt(vcpu) ((vcpu)->arch.ctxt)
+
+/* VCPU Context accessors (direct) */
+#define ctxt_gp_regs(c)		(&(c)->regs)
+#define ctxt_spsr_abt(c)	(&(c)->spsr_abt)
+#define ctxt_spsr_und(c)	(&(c)->spsr_und)
+#define ctxt_spsr_irq(c)	(&(c)->spsr_irq)
+#define ctxt_spsr_fiq(c)	(&(c)->spsr_fiq)
+#define ctxt_fp_regs(c)		(&(c)->fp_regs)
+
+/* VCPU Context accessors */
+#define vcpu_gp_regs(v)		ctxt_gp_regs(&vcpu_ctxt(v))
+#define vcpu_spsr_abt(v)	ctxt_spsr_abt(&vcpu_ctxt(v))
+#define vcpu_spsr_und(v)	ctxt_spsr_und(&vcpu_ctxt(v))
+#define vcpu_spsr_irq(v)	ctxt_spsr_irq(&vcpu_ctxt(v))
+#define vcpu_spsr_fiq(v)	ctxt_spsr_fiq(&vcpu_ctxt(v))
+#define vcpu_fp_regs(v)		ctxt_fp_regs(&vcpu_ctxt(v))
 
 /*
  * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the
diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index 11541b94b328..643c5844f684 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -18,43 +18,68 @@
 #error Hypervisor code only!
 #endif
 
-static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
+static inline u64 __ctxt_read_sys_reg(const struct kvm_cpu_context *vcpu_ctxt, int reg)
 {
 	u64 val;
 
 	if (__vcpu_read_sys_reg_from_cpu(reg, &val))
 		return val;
 
-	return __vcpu_sys_reg(vcpu, reg);
+	return ctxt_sys_reg(vcpu_ctxt, reg);
 }
 
-static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
+static inline void __ctxt_write_sys_reg(struct kvm_cpu_context *vcpu_ctxt, u64 val, int reg)
 {
 	if (__vcpu_write_sys_reg_to_cpu(val, reg))
 		return;
 
-	__vcpu_sys_reg(vcpu, reg) = val;
+	ctxt_sys_reg(vcpu_ctxt, reg) = val;
 }
 
-static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val)
+static void __ctxt_write_spsr(struct kvm_cpu_context *vcpu_ctxt, u64 val)
 {
 	write_sysreg_el1(val, SYS_SPSR);
 }
 
-static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val)
+static void __ctxt_write_spsr_abt(struct kvm_cpu_context *vcpu_ctxt, u64 val)
 {
 	if (has_vhe())
 		write_sysreg(val, spsr_abt);
 	else
-		vcpu->arch.ctxt.spsr_abt = val;
+		*ctxt_spsr_abt(vcpu_ctxt) = val;
 }
 
-static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
+static void __ctxt_write_spsr_und(struct kvm_cpu_context *vcpu_ctxt, u64 val)
 {
 	if (has_vhe())
 		write_sysreg(val, spsr_und);
 	else
-		vcpu->arch.ctxt.spsr_und = val;
+		*ctxt_spsr_und(vcpu_ctxt) = val;
+}
+
+static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
+{
+	return __ctxt_read_sys_reg(&vcpu_ctxt(vcpu), reg);
+}
+
+static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
+{
+	__ctxt_write_sys_reg(&vcpu_ctxt(vcpu), val, reg);
+}
+
+static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val)
+{
+	__ctxt_write_spsr(&vcpu_ctxt(vcpu), val);
+}
+
+static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val)
+{
+	__ctxt_write_spsr_abt(&vcpu_ctxt(vcpu), val);
+}
+
+static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
+{
+	__ctxt_write_spsr_und(&vcpu_ctxt(vcpu), val);
 }
 
 /*
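A small, hypothetical usage sketch of the new ctxt-level API: code that only has a struct kvm_cpu_context in hand can now read and write guest registers without going through a struct kvm_vcpu (example_do_branch() is invented; u8 is the usual kernel type, and ctxt_get_reg()/ctxt_pc() are the accessors added above):

    /* Branch the guest to the address held in general-purpose register rn. */
    static void example_do_branch(struct kvm_cpu_context *ctxt, u8 rn)
    {
            unsigned long target = ctxt_get_reg(ctxt, rn);

            *ctxt_pc(ctxt) = target;
    }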
From patchwork Fri Sep 24 12:53:35 2021
X-Patchwork-Id: 12515417
From: Fuad Tabba <tabba@google.com>
Date: Fri, 24 Sep 2021 13:53:35 +0100
Subject: [RFC PATCH v1 06/30] KVM: arm64: COCCI: use_ctxt_access.cocci: use kvm_cpu_context accessors
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com
Message-Id: <20210924125359.2587041-7-tabba@google.com>
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>

Some parts of the code access vcpu->arch.ctxt directly instead of using
the existing accessors. Refactor to use the existing accessors, making
the code more consistent and simplifying future patches.

This applies the semantic patch with the following command:

spatch --sp-file cocci_refactor/use_ctxt_access.cocci --dir arch/arm64/kvm/ --include-headers --in-place

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/fpsimd.c                    |  2 +-
 arch/arm64/kvm/guest.c                     | 28 +++++++++++-----------
 arch/arm64/kvm/hyp/include/hyp/switch.h    |  4 ++--
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 16 ++++++-------
 arch/arm64/kvm/reset.c                     | 10 ++++----
 5 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 5621020b28de..db135588236a 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -97,7 +97,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
 	WARN_ON_ONCE(!irqs_disabled());
 
 	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
-		fpsimd_bind_state_to_cpu(&vcpu->arch.ctxt.fp_regs,
+		fpsimd_bind_state_to_cpu(vcpu_fp_regs(vcpu),
 					 vcpu->arch.sve_state,
 					 vcpu->arch.sve_max_vl);
 
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 5cb4a1cd5603..c4429307a164 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -116,49 +116,49 @@ static void *core_reg_addr(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	     KVM_REG_ARM_CORE_REG(regs.regs[30]):
 		off -= KVM_REG_ARM_CORE_REG(regs.regs[0]);
 		off /= 2;
-		return &vcpu->arch.ctxt.regs.regs[off];
+		return &vcpu_gp_regs(vcpu)->regs[off];
 
 	case KVM_REG_ARM_CORE_REG(regs.sp):
-		return &vcpu->arch.ctxt.regs.sp;
+		return &vcpu_gp_regs(vcpu)->sp;
 
 	case KVM_REG_ARM_CORE_REG(regs.pc):
-		return &vcpu->arch.ctxt.regs.pc;
+		return &vcpu_gp_regs(vcpu)->pc;
 
 	case KVM_REG_ARM_CORE_REG(regs.pstate):
-		return &vcpu->arch.ctxt.regs.pstate;
+		return &vcpu_gp_regs(vcpu)->pstate;
 
 	case KVM_REG_ARM_CORE_REG(sp_el1):
-		return __ctxt_sys_reg(&vcpu->arch.ctxt, SP_EL1);
+		return &__vcpu_sys_reg(vcpu, SP_EL1);
 
 	case KVM_REG_ARM_CORE_REG(elr_el1):
-		return __ctxt_sys_reg(&vcpu->arch.ctxt, ELR_EL1);
+		return &__vcpu_sys_reg(vcpu, ELR_EL1);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_EL1]):
-		return __ctxt_sys_reg(&vcpu->arch.ctxt, SPSR_EL1);
+		return &__vcpu_sys_reg(vcpu, SPSR_EL1);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_ABT]):
-		return &vcpu->arch.ctxt.spsr_abt;
+		return vcpu_spsr_abt(vcpu);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_UND]):
-		return &vcpu->arch.ctxt.spsr_und;
+		return vcpu_spsr_und(vcpu);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_IRQ]):
-		return &vcpu->arch.ctxt.spsr_irq;
+		return vcpu_spsr_irq(vcpu);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_FIQ]):
-		return &vcpu->arch.ctxt.spsr_fiq;
+		return vcpu_spsr_fiq(vcpu);
 
 	case KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]) ...
 	     KVM_REG_ARM_CORE_REG(fp_regs.vregs[31]):
 		off -= KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]);
 		off /= 4;
-		return &vcpu->arch.ctxt.fp_regs.vregs[off];
+		return &vcpu_fp_regs(vcpu)->vregs[off];
 
 	case KVM_REG_ARM_CORE_REG(fp_regs.fpsr):
-		return &vcpu->arch.ctxt.fp_regs.fpsr;
+		return &vcpu_fp_regs(vcpu)->fpsr;
 
 	case KVM_REG_ARM_CORE_REG(fp_regs.fpcr):
-		return &vcpu->arch.ctxt.fp_regs.fpcr;
+		return &vcpu_fp_regs(vcpu)->fpcr;
 
 	default:
 		return NULL;
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index e4a2f295a394..9fa9cf71eefa 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -217,7 +217,7 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 {
 	sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
 	__sve_restore_state(vcpu_sve_pffr(vcpu),
-			    &vcpu->arch.ctxt.fp_regs.fpsr);
+			    &vcpu_fp_regs(vcpu)->fpsr);
 	write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
 }
 
@@ -276,7 +276,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 	if (sve_guest)
 		__hyp_sve_restore_guest(vcpu);
 	else
-		__fpsimd_restore_state(&vcpu->arch.ctxt.fp_regs);
+		__fpsimd_restore_state(vcpu_fp_regs(vcpu));
 
 	/* Skip restoring fpexc32 for AArch64 guests */
 	if (!(read_sysreg(hcr_el2) & HCR_RW))
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index cce43bfe158f..9451206f512e 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -161,10 +161,10 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
-	vcpu->arch.ctxt.spsr_abt = read_sysreg(spsr_abt);
-	vcpu->arch.ctxt.spsr_und = read_sysreg(spsr_und);
-	vcpu->arch.ctxt.spsr_irq = read_sysreg(spsr_irq);
-	vcpu->arch.ctxt.spsr_fiq = read_sysreg(spsr_fiq);
+	*vcpu_spsr_abt(vcpu) = read_sysreg(spsr_abt);
+	*vcpu_spsr_und(vcpu) = read_sysreg(spsr_und);
+	*vcpu_spsr_irq(vcpu) = read_sysreg(spsr_irq);
+	*vcpu_spsr_fiq(vcpu) = read_sysreg(spsr_fiq);
 
 	__vcpu_sys_reg(vcpu, DACR32_EL2) = read_sysreg(dacr32_el2);
 	__vcpu_sys_reg(vcpu, IFSR32_EL2) = read_sysreg(ifsr32_el2);
@@ -178,10 +178,10 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
-	write_sysreg(vcpu->arch.ctxt.spsr_abt, spsr_abt);
-	write_sysreg(vcpu->arch.ctxt.spsr_und, spsr_und);
-	write_sysreg(vcpu->arch.ctxt.spsr_irq, spsr_irq);
-	write_sysreg(vcpu->arch.ctxt.spsr_fiq, spsr_fiq);
+	write_sysreg(*vcpu_spsr_abt(vcpu), spsr_abt);
+	write_sysreg(*vcpu_spsr_und(vcpu), spsr_und);
+	write_sysreg(*vcpu_spsr_irq(vcpu), spsr_irq);
+	write_sysreg(*vcpu_spsr_fiq(vcpu), spsr_fiq);
 
 	write_sysreg(__vcpu_sys_reg(vcpu, DACR32_EL2), dacr32_el2);
 	write_sysreg(__vcpu_sys_reg(vcpu, IFSR32_EL2), ifsr32_el2);
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index d37ebee085cf..ab1ef5313a3e 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -258,11 +258,11 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 
 	/* Reset core registers */
 	memset(vcpu_gp_regs(vcpu), 0, sizeof(*vcpu_gp_regs(vcpu)));
-	memset(&vcpu->arch.ctxt.fp_regs, 0, sizeof(vcpu->arch.ctxt.fp_regs));
-	vcpu->arch.ctxt.spsr_abt = 0;
-	vcpu->arch.ctxt.spsr_und = 0;
-	vcpu->arch.ctxt.spsr_irq = 0;
-	vcpu->arch.ctxt.spsr_fiq = 0;
+	memset(vcpu_fp_regs(vcpu), 0, sizeof(*vcpu_fp_regs(vcpu)));
+	*vcpu_spsr_abt(vcpu) = 0;
+	*vcpu_spsr_und(vcpu) = 0;
+	*vcpu_spsr_irq(vcpu) = 0;
+	*vcpu_spsr_fiq(vcpu) = 0;
 	vcpu_gp_regs(vcpu)->pstate = pstate;
/* Reset system registers */
From patchwork Fri Sep 24 12:53:36 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515419
Date: Fri, 24 Sep 2021 13:53:36 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-8-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
X-Mailer: git-send-email 2.33.0.685.g46640cef36-goog
Subject: [RFC
PATCH v1 07/30] KVM: arm64: COCCI: add_ctxt.cocci use_ctxt.cocci: reduce scope of functions to kvm_cpu_ctxt From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Many functions don't need access to the vcpu structure, but only the kvm_cpu_ctxt. Reduce their scope. This applies the semantic patches with the following commands: spatch --sp-file cocci_refactor/add_ctxt.cocci --dir arch/arm64/kvm/hyp --ignore arch/arm64/kvm/hyp/nvhe/debug-sr.c --ignore arch/arm64/kvm/hyp/vhe/debug-sr.c --include-headers --in-place spatch --sp-file cocci_refactor/use_ctxt.cocci --dir arch/arm64/kvm/hyp --include-headers --in-place spatch --sp-file cocci_refactor/use_ctxt.cocci --dir arch/arm64/kvm/hyp --include-headers --in-place This patch adds variables that may be unused. These will be removed at the end of this patch series. Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/aarch32.c | 18 +++--- arch/arm64/kvm/hyp/exception.c | 60 ++++++++++-------- arch/arm64/kvm/hyp/include/hyp/adjust_pc.h | 18 +++--- arch/arm64/kvm/hyp/include/hyp/switch.h | 20 ++++-- arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 31 +++++----- arch/arm64/kvm/hyp/nvhe/switch.c | 5 ++ arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c | 13 ++-- arch/arm64/kvm/hyp/vgic-v3-sr.c | 71 +++++++++++++++------- arch/arm64/kvm/hyp/vhe/switch.c | 7 +++ arch/arm64/kvm/hyp/vhe/sysreg-sr.c | 2 + 10 files changed, 155 insertions(+), 90 deletions(-) diff --git a/arch/arm64/kvm/hyp/aarch32.c b/arch/arm64/kvm/hyp/aarch32.c index f98cbe2626a1..27ebfff023ff 100644 --- a/arch/arm64/kvm/hyp/aarch32.c +++ b/arch/arm64/kvm/hyp/aarch32.c @@ -46,6 +46,7 @@ static const unsigned short cc_map[16] = { */ bool kvm_condition_valid32(const struct kvm_vcpu *vcpu) { + const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); unsigned long cpsr; u32 cpsr_cond; int cond; @@ -59,7 +60,7 @@ bool kvm_condition_valid32(const struct kvm_vcpu *vcpu) if (cond == 0xE) return true; - cpsr = *vcpu_cpsr(vcpu); + cpsr = *ctxt_cpsr(vcpu_ctxt); if (cond < 0) { /* This can happen in Thumb mode: examine IT state. 
*/ @@ -93,10 +94,10 @@ bool kvm_condition_valid32(const struct kvm_vcpu *vcpu) * * IT[7:0] -> CPSR[26:25],CPSR[15:10] */ -static void kvm_adjust_itstate(struct kvm_vcpu *vcpu) +static void kvm_adjust_itstate(struct kvm_cpu_context *vcpu_ctxt) { unsigned long itbits, cond; - unsigned long cpsr = *vcpu_cpsr(vcpu); + unsigned long cpsr = *ctxt_cpsr(vcpu_ctxt); bool is_arm = !(cpsr & PSR_AA32_T_BIT); if (is_arm || !(cpsr & PSR_AA32_IT_MASK)) @@ -116,7 +117,7 @@ static void kvm_adjust_itstate(struct kvm_vcpu *vcpu) cpsr |= cond << 13; cpsr |= (itbits & 0x1c) << (10 - 2); cpsr |= (itbits & 0x3) << 25; - *vcpu_cpsr(vcpu) = cpsr; + *ctxt_cpsr(vcpu_ctxt) = cpsr; } /** @@ -125,16 +126,17 @@ static void kvm_adjust_itstate(struct kvm_vcpu *vcpu) */ void kvm_skip_instr32(struct kvm_vcpu *vcpu) { - u32 pc = *vcpu_pc(vcpu); + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + u32 pc = *ctxt_pc(vcpu_ctxt); bool is_thumb; - is_thumb = !!(*vcpu_cpsr(vcpu) & PSR_AA32_T_BIT); + is_thumb = !!(*ctxt_cpsr(vcpu_ctxt) & PSR_AA32_T_BIT); if (is_thumb && !kvm_vcpu_trap_il_is32bit(vcpu)) pc += 2; else pc += 4; - *vcpu_pc(vcpu) = pc; + *ctxt_pc(vcpu_ctxt) = pc; - kvm_adjust_itstate(vcpu); + kvm_adjust_itstate(vcpu_ctxt); } diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c index 643c5844f684..e23b9cedb043 100644 --- a/arch/arm64/kvm/hyp/exception.c +++ b/arch/arm64/kvm/hyp/exception.c @@ -99,13 +99,14 @@ static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val) * Here we manipulate the fields in order of the AArch64 SPSR_ELx layout, from * MSB to LSB. */ -static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode, +static void enter_exception64(struct kvm_cpu_context *vcpu_ctxt, + unsigned long target_mode, enum exception_type type) { unsigned long sctlr, vbar, old, new, mode; u64 exc_offset; - mode = *vcpu_cpsr(vcpu) & (PSR_MODE_MASK | PSR_MODE32_BIT); + mode = *ctxt_cpsr(vcpu_ctxt) & (PSR_MODE_MASK | PSR_MODE32_BIT); if (mode == target_mode) exc_offset = CURRENT_EL_SP_ELx_VECTOR; @@ -118,18 +119,18 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode, switch (target_mode) { case PSR_MODE_EL1h: - vbar = __vcpu_read_sys_reg(vcpu, VBAR_EL1); - sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL1); - __vcpu_write_sys_reg(vcpu, *vcpu_pc(vcpu), ELR_EL1); + vbar = __ctxt_read_sys_reg(vcpu_ctxt, VBAR_EL1); + sctlr = __ctxt_read_sys_reg(vcpu_ctxt, SCTLR_EL1); + __ctxt_write_sys_reg(vcpu_ctxt, *ctxt_pc(vcpu_ctxt), ELR_EL1); break; default: /* Don't do that */ BUG(); } - *vcpu_pc(vcpu) = vbar + exc_offset + type; + *ctxt_pc(vcpu_ctxt) = vbar + exc_offset + type; - old = *vcpu_cpsr(vcpu); + old = *ctxt_cpsr(vcpu_ctxt); new = 0; new |= (old & PSR_N_BIT); @@ -172,8 +173,8 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode, new |= target_mode; - *vcpu_cpsr(vcpu) = new; - __vcpu_write_spsr(vcpu, old); + *ctxt_cpsr(vcpu_ctxt) = new; + __ctxt_write_spsr(vcpu_ctxt, old); } /* @@ -194,12 +195,13 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode, * Here we manipulate the fields in order of the AArch32 SPSR_ELx layout, from * MSB to LSB. 
*/ -static unsigned long get_except32_cpsr(struct kvm_vcpu *vcpu, u32 mode) +static unsigned long get_except32_cpsr(struct kvm_cpu_context *vcpu_ctxt, + u32 mode) { - u32 sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL1); + u32 sctlr = __ctxt_read_sys_reg(vcpu_ctxt, SCTLR_EL1); unsigned long old, new; - old = *vcpu_cpsr(vcpu); + old = *ctxt_cpsr(vcpu_ctxt); new = 0; new |= (old & PSR_AA32_N_BIT); @@ -288,27 +290,28 @@ static const u8 return_offsets[8][2] = { [7] = { 4, 4 }, /* FIQ, unused */ }; -static void enter_exception32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset) +static void enter_exception32(struct kvm_cpu_context *vcpu_ctxt, u32 mode, + u32 vect_offset) { - unsigned long spsr = *vcpu_cpsr(vcpu); + unsigned long spsr = *ctxt_cpsr(vcpu_ctxt); bool is_thumb = (spsr & PSR_AA32_T_BIT); - u32 sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL1); + u32 sctlr = __ctxt_read_sys_reg(vcpu_ctxt, SCTLR_EL1); u32 return_address; - *vcpu_cpsr(vcpu) = get_except32_cpsr(vcpu, mode); - return_address = *vcpu_pc(vcpu); + *ctxt_cpsr(vcpu_ctxt) = get_except32_cpsr(vcpu_ctxt, mode); + return_address = *ctxt_pc(vcpu_ctxt); return_address += return_offsets[vect_offset >> 2][is_thumb]; /* KVM only enters the ABT and UND modes, so only deal with those */ switch(mode) { case PSR_AA32_MODE_ABT: - __vcpu_write_spsr_abt(vcpu, host_spsr_to_spsr32(spsr)); - vcpu_gp_regs(vcpu)->compat_lr_abt = return_address; + __ctxt_write_spsr_abt(vcpu_ctxt, host_spsr_to_spsr32(spsr)); + ctxt_gp_regs(vcpu_ctxt)->compat_lr_abt = return_address; break; case PSR_AA32_MODE_UND: - __vcpu_write_spsr_und(vcpu, host_spsr_to_spsr32(spsr)); - vcpu_gp_regs(vcpu)->compat_lr_und = return_address; + __ctxt_write_spsr_und(vcpu_ctxt, host_spsr_to_spsr32(spsr)); + ctxt_gp_regs(vcpu_ctxt)->compat_lr_und = return_address; break; } @@ -316,23 +319,24 @@ static void enter_exception32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset) if (sctlr & (1 << 13)) vect_offset += 0xffff0000; else /* always have security exceptions */ - vect_offset += __vcpu_read_sys_reg(vcpu, VBAR_EL1); + vect_offset += __ctxt_read_sys_reg(vcpu_ctxt, VBAR_EL1); - *vcpu_pc(vcpu) = vect_offset; + *ctxt_pc(vcpu_ctxt) = vect_offset; } static void kvm_inject_exception(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (vcpu_el1_is_32bit(vcpu)) { switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) { case KVM_ARM64_EXCEPT_AA32_UND: - enter_exception32(vcpu, PSR_AA32_MODE_UND, 4); + enter_exception32(vcpu_ctxt, PSR_AA32_MODE_UND, 4); break; case KVM_ARM64_EXCEPT_AA32_IABT: - enter_exception32(vcpu, PSR_AA32_MODE_ABT, 12); + enter_exception32(vcpu_ctxt, PSR_AA32_MODE_ABT, 12); break; case KVM_ARM64_EXCEPT_AA32_DABT: - enter_exception32(vcpu, PSR_AA32_MODE_ABT, 16); + enter_exception32(vcpu_ctxt, PSR_AA32_MODE_ABT, 16); break; default: /* Err... 
*/ @@ -342,7 +346,8 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu) switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) { case (KVM_ARM64_EXCEPT_AA64_ELx_SYNC | KVM_ARM64_EXCEPT_AA64_EL1): - enter_exception64(vcpu, PSR_MODE_EL1h, except_type_sync); + enter_exception64(vcpu_ctxt, PSR_MODE_EL1h, + except_type_sync); break; default: /* @@ -361,6 +366,7 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu) */ void __kvm_adjust_pc(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (vcpu->arch.flags & KVM_ARM64_PENDING_EXCEPTION) { kvm_inject_exception(vcpu); vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION | diff --git a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h index 4fdfeabefeb4..20dde9dbc11b 100644 --- a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h +++ b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h @@ -15,15 +15,16 @@ static inline void kvm_skip_instr(struct kvm_vcpu *vcpu) { - if (vcpu_mode_is_32bit(vcpu)) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + if (ctxt_mode_is_32bit(vcpu_ctxt)) { kvm_skip_instr32(vcpu); } else { - *vcpu_pc(vcpu) += 4; - *vcpu_cpsr(vcpu) &= ~PSR_BTYPE_MASK; + *ctxt_pc(vcpu_ctxt) += 4; + *ctxt_cpsr(vcpu_ctxt) &= ~PSR_BTYPE_MASK; } /* advance the singlestep state machine */ - *vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS; + *ctxt_cpsr(vcpu_ctxt) &= ~DBG_SPSR_SS; } /* @@ -32,13 +33,14 @@ static inline void kvm_skip_instr(struct kvm_vcpu *vcpu) */ static inline void __kvm_skip_instr(struct kvm_vcpu *vcpu) { - *vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR); - vcpu_gp_regs(vcpu)->pstate = read_sysreg_el2(SYS_SPSR); + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + *ctxt_pc(vcpu_ctxt) = read_sysreg_el2(SYS_ELR); + ctxt_gp_regs(vcpu_ctxt)->pstate = read_sysreg_el2(SYS_SPSR); kvm_skip_instr(vcpu); - write_sysreg_el2(vcpu_gp_regs(vcpu)->pstate, SYS_SPSR); - write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR); + write_sysreg_el2(ctxt_gp_regs(vcpu_ctxt)->pstate, SYS_SPSR); + write_sysreg_el2(*ctxt_pc(vcpu_ctxt), SYS_ELR); } /* diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 9fa9cf71eefa..41c553a7b5dd 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -54,14 +54,16 @@ static inline bool update_fp_enabled(struct kvm_vcpu *vcpu) /* Save the 32-bit only FPSIMD system register state */ static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (!vcpu_el1_is_32bit(vcpu)) return; - __vcpu_sys_reg(vcpu, FPEXC32_EL2) = read_sysreg(fpexc32_el2); + ctxt_sys_reg(vcpu_ctxt, FPEXC32_EL2) = read_sysreg(fpexc32_el2); } static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); /* * We are about to set CPTR_EL2.TFP to trap all floating point * register accesses to EL2, however, the ARM ARM clearly states that @@ -215,15 +217,17 @@ static inline void __hyp_sve_save_host(struct kvm_vcpu *vcpu) static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2); __sve_restore_state(vcpu_sve_pffr(vcpu), - &vcpu_fp_regs(vcpu)->fpsr); - write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR); + &ctxt_fp_regs(vcpu_ctxt)->fpsr); + write_sysreg_el1(ctxt_sys_reg(vcpu_ctxt, ZCR_EL1), SYS_ZCR); } /* Check for an FPSIMD/SVE trap and handle as appropriate */ static inline bool 
__hyp_handle_fpsimd(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); bool sve_guest, sve_host; u8 esr_ec; u64 reg; @@ -276,11 +280,12 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) if (sve_guest) __hyp_sve_restore_guest(vcpu); else - __fpsimd_restore_state(vcpu_fp_regs(vcpu)); + __fpsimd_restore_state(ctxt_fp_regs(vcpu_ctxt)); /* Skip restoring fpexc32 for AArch64 guests */ if (!(read_sysreg(hcr_el2) & HCR_RW)) - write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2); + write_sysreg(ctxt_sys_reg(vcpu_ctxt, FPEXC32_EL2), + fpexc32_el2); vcpu->arch.flags |= KVM_ARM64_FP_ENABLED; @@ -289,9 +294,10 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu)); int rt = kvm_vcpu_sys_get_rt(vcpu); - u64 val = vcpu_get_reg(vcpu, rt); + u64 val = ctxt_get_reg(vcpu_ctxt, rt); /* * The normal sysreg handling code expects to see the traps, @@ -382,6 +388,7 @@ DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt); static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *ctxt; u64 val; @@ -412,6 +419,7 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu) */ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR); diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h index 9451206f512e..c2668b85b67e 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -158,36 +158,39 @@ static inline void __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctx static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (!vcpu_el1_is_32bit(vcpu)) return; - *vcpu_spsr_abt(vcpu) = read_sysreg(spsr_abt); - *vcpu_spsr_und(vcpu) = read_sysreg(spsr_und); - *vcpu_spsr_irq(vcpu) = read_sysreg(spsr_irq); - *vcpu_spsr_fiq(vcpu) = read_sysreg(spsr_fiq); + *ctxt_spsr_abt(vcpu_ctxt) = read_sysreg(spsr_abt); + *ctxt_spsr_und(vcpu_ctxt) = read_sysreg(spsr_und); + *ctxt_spsr_irq(vcpu_ctxt) = read_sysreg(spsr_irq); + *ctxt_spsr_fiq(vcpu_ctxt) = read_sysreg(spsr_fiq); - __vcpu_sys_reg(vcpu, DACR32_EL2) = read_sysreg(dacr32_el2); - __vcpu_sys_reg(vcpu, IFSR32_EL2) = read_sysreg(ifsr32_el2); + ctxt_sys_reg(vcpu_ctxt, DACR32_EL2) = read_sysreg(dacr32_el2); + ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2) = read_sysreg(ifsr32_el2); if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) - __vcpu_sys_reg(vcpu, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2); + ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2); } static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (!vcpu_el1_is_32bit(vcpu)) return; - write_sysreg(*vcpu_spsr_abt(vcpu), spsr_abt); - write_sysreg(*vcpu_spsr_und(vcpu), spsr_und); - write_sysreg(*vcpu_spsr_irq(vcpu), spsr_irq); - write_sysreg(*vcpu_spsr_fiq(vcpu), spsr_fiq); + write_sysreg(*ctxt_spsr_abt(vcpu_ctxt), spsr_abt); + write_sysreg(*ctxt_spsr_und(vcpu_ctxt), spsr_und); + write_sysreg(*ctxt_spsr_irq(vcpu_ctxt), spsr_irq); + write_sysreg(*ctxt_spsr_fiq(vcpu_ctxt), 
spsr_fiq); - write_sysreg(__vcpu_sys_reg(vcpu, DACR32_EL2), dacr32_el2); - write_sysreg(__vcpu_sys_reg(vcpu, IFSR32_EL2), ifsr32_el2); + write_sysreg(ctxt_sys_reg(vcpu_ctxt, DACR32_EL2), dacr32_el2); + write_sysreg(ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2), ifsr32_el2); if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) - write_sysreg(__vcpu_sys_reg(vcpu, DBGVCR32_EL2), dbgvcr32_el2); + write_sysreg(ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2), + dbgvcr32_el2); } #endif /* __ARM64_KVM_HYP_SYSREG_SR_H__ */ diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 9296d7108f93..d5780acab6c2 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -36,6 +36,7 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector); static void __activate_traps(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val; ___activate_traps(vcpu); @@ -68,6 +69,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu) static void __deactivate_traps(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); extern char __kvm_hyp_host_vector[]; u64 mdcr_el2, cptr; @@ -168,6 +170,7 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) /* Switch to the guest for legacy non-VHE systems */ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; bool pmu_switch_needed; @@ -267,9 +270,11 @@ void __noreturn hyp_panic(void) u64 par = read_sysreg_par(); struct kvm_cpu_context *host_ctxt; struct kvm_vcpu *vcpu; + struct kvm_cpu_context *vcpu_ctxt; host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; vcpu = host_ctxt->__hyp_running_vcpu; + vcpu_ctxt = &vcpu_ctxt(vcpu); if (vcpu) { __timer_disable_traps(); diff --git a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c index 87a54375bd6e..8dbc39026cc5 100644 --- a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c +++ b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c @@ -15,9 +15,9 @@ #include #include -static bool __is_be(struct kvm_vcpu *vcpu) +static bool __is_be(struct kvm_cpu_context *vcpu_ctxt) { - if (vcpu_mode_is_32bit(vcpu)) + if (ctxt_mode_is_32bit(vcpu_ctxt)) return !!(read_sysreg_el2(SYS_SPSR) & PSR_AA32_E_BIT); return !!(read_sysreg(SCTLR_EL1) & SCTLR_ELx_EE); @@ -36,6 +36,7 @@ static bool __is_be(struct kvm_vcpu *vcpu) */ int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm *kvm = kern_hyp_va(vcpu->kvm); struct vgic_dist *vgic = &kvm->arch.vgic; phys_addr_t fault_ipa; @@ -68,19 +69,19 @@ int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu) addr += fault_ipa - vgic->vgic_cpu_base; if (kvm_vcpu_dabt_iswrite(vcpu)) { - u32 data = vcpu_get_reg(vcpu, rd); - if (__is_be(vcpu)) { + u32 data = ctxt_get_reg(vcpu_ctxt, rd); + if (__is_be(vcpu_ctxt)) { /* guest pre-swabbed data, undo this for writel() */ data = __kvm_swab32(data); } writel_relaxed(data, addr); } else { u32 data = readl_relaxed(addr); - if (__is_be(vcpu)) { + if (__is_be(vcpu_ctxt)) { /* guest expects swabbed data */ data = __kvm_swab32(data); } - vcpu_set_reg(vcpu, rd, data); + ctxt_set_reg(vcpu_ctxt, rd, data); } __kvm_skip_instr(vcpu); diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c index 39f8f7f9227c..bdb03b8e50ab 100644 --- a/arch/arm64/kvm/hyp/vgic-v3-sr.c +++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c @@ -473,6 +473,7 @@ static int __vgic_v3_bpr_min(void) 
static int __vgic_v3_get_group(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 esr = kvm_vcpu_get_esr(vcpu); u8 crm = (esr & ESR_ELx_SYS64_ISS_CRM_MASK) >> ESR_ELx_SYS64_ISS_CRM_SHIFT; @@ -673,6 +674,7 @@ static int __vgic_v3_clear_highest_active_priority(void) static void __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 lr_val; u8 lr_prio, pmr; int lr, grp; @@ -700,11 +702,11 @@ static void __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt) lr_val |= ICH_LR_ACTIVE_BIT; __gic_v3_set_lr(lr_val, lr); __vgic_v3_set_active_priority(lr_prio, vmcr, grp); - vcpu_set_reg(vcpu, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK); + ctxt_set_reg(vcpu_ctxt, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK); return; spurious: - vcpu_set_reg(vcpu, rt, ICC_IAR1_EL1_SPURIOUS); + ctxt_set_reg(vcpu_ctxt, rt, ICC_IAR1_EL1_SPURIOUS); } static void __vgic_v3_clear_active_lr(int lr, u64 lr_val) @@ -731,7 +733,8 @@ static void __vgic_v3_bump_eoicount(void) static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { - u32 vid = vcpu_get_reg(vcpu, rt); + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + u32 vid = ctxt_get_reg(vcpu_ctxt, rt); u64 lr_val; int lr; @@ -754,7 +757,8 @@ static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { - u32 vid = vcpu_get_reg(vcpu, rt); + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + u32 vid = ctxt_get_reg(vcpu_ctxt, rt); u64 lr_val; u8 lr_prio, act_prio; int lr, grp; @@ -791,17 +795,20 @@ static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_read_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { - vcpu_set_reg(vcpu, rt, !!(vmcr & ICH_VMCR_ENG0_MASK)); + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG0_MASK)); } static void __vgic_v3_read_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { - vcpu_set_reg(vcpu, rt, !!(vmcr & ICH_VMCR_ENG1_MASK)); + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG1_MASK)); } static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { - u64 val = vcpu_get_reg(vcpu, rt); + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + u64 val = ctxt_get_reg(vcpu_ctxt, rt); if (val & 1) vmcr |= ICH_VMCR_ENG0_MASK; @@ -813,7 +820,8 @@ static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { - u64 val = vcpu_get_reg(vcpu, rt); + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + u64 val = ctxt_get_reg(vcpu_ctxt, rt); if (val & 1) vmcr |= ICH_VMCR_ENG1_MASK; @@ -825,17 +833,20 @@ static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_read_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { - vcpu_set_reg(vcpu, rt, __vgic_v3_get_bpr0(vmcr)); + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr0(vmcr)); } static void __vgic_v3_read_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { - vcpu_set_reg(vcpu, rt, __vgic_v3_get_bpr1(vmcr)); + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr1(vmcr)); } static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { - u64 val = vcpu_get_reg(vcpu, rt); + struct kvm_cpu_context 
*vcpu_ctxt = &vcpu_ctxt(vcpu); + u64 val = ctxt_get_reg(vcpu_ctxt, rt); u8 bpr_min = __vgic_v3_bpr_min() - 1; /* Enforce BPR limiting */ @@ -852,7 +863,8 @@ static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { - u64 val = vcpu_get_reg(vcpu, rt); + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + u64 val = ctxt_get_reg(vcpu_ctxt, rt); u8 bpr_min = __vgic_v3_bpr_min(); if (vmcr & ICH_VMCR_CBPR_MASK) @@ -872,6 +884,7 @@ static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val; if (!__vgic_v3_get_group(vcpu)) @@ -879,12 +892,13 @@ static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n) else val = __vgic_v3_read_ap1rn(n); - vcpu_set_reg(vcpu, rt, val); + ctxt_set_reg(vcpu_ctxt, rt, val); } static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n) { - u32 val = vcpu_get_reg(vcpu, rt); + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + u32 val = ctxt_get_reg(vcpu_ctxt, rt); if (!__vgic_v3_get_group(vcpu)) __vgic_v3_write_ap0rn(val, n); @@ -895,47 +909,56 @@ static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n) static void __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_read_apxrn(vcpu, rt, 0); } static void __vgic_v3_read_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_read_apxrn(vcpu, rt, 1); } static void __vgic_v3_read_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_read_apxrn(vcpu, rt, 2); } static void __vgic_v3_read_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_read_apxrn(vcpu, rt, 3); } static void __vgic_v3_write_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_write_apxrn(vcpu, rt, 0); } static void __vgic_v3_write_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_write_apxrn(vcpu, rt, 1); } static void __vgic_v3_write_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_write_apxrn(vcpu, rt, 2); } static void __vgic_v3_write_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_write_apxrn(vcpu, rt, 3); } static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 lr_val; int lr, lr_grp, grp; @@ -950,19 +973,21 @@ static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) lr_val = ICC_IAR1_EL1_SPURIOUS; spurious: - vcpu_set_reg(vcpu, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK); + ctxt_set_reg(vcpu_ctxt, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK); } static void __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); vmcr &= ICH_VMCR_PMR_MASK; vmcr >>= ICH_VMCR_PMR_SHIFT; - vcpu_set_reg(vcpu, rt, vmcr); + ctxt_set_reg(vcpu_ctxt, rt, vmcr); } static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { - u32 val = vcpu_get_reg(vcpu, rt); + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + u32 val = 
ctxt_get_reg(vcpu_ctxt, rt); val <<= ICH_VMCR_PMR_SHIFT; val &= ICH_VMCR_PMR_MASK; @@ -974,12 +999,14 @@ static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val = __vgic_v3_get_highest_active_priority(); - vcpu_set_reg(vcpu, rt, val); + ctxt_set_reg(vcpu_ctxt, rt, val); } static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 vtr, val; vtr = read_gicreg(ICH_VTR_EL2); @@ -996,12 +1023,13 @@ static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) /* CBPR */ val |= (vmcr & ICH_VMCR_CBPR_MASK) >> ICH_VMCR_CBPR_SHIFT; - vcpu_set_reg(vcpu, rt, val); + ctxt_set_reg(vcpu_ctxt, rt, val); } static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { - u32 val = vcpu_get_reg(vcpu, rt); + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + u32 val = ctxt_get_reg(vcpu_ctxt, rt); if (val & ICC_CTLR_EL1_CBPR_MASK) vmcr |= ICH_VMCR_CBPR_MASK; @@ -1018,6 +1046,7 @@ static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); int rt; u32 esr; u32 vmcr; @@ -1026,7 +1055,7 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) u32 sysreg; esr = kvm_vcpu_get_esr(vcpu); - if (vcpu_mode_is_32bit(vcpu)) { + if (ctxt_mode_is_32bit(vcpu_ctxt)) { if (!kvm_condition_valid(vcpu)) { __kvm_skip_instr(vcpu); return 1; diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index b3229924d243..c2e443202f8e 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -33,6 +33,7 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector); static void __activate_traps(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val; ___activate_traps(vcpu); @@ -68,6 +69,7 @@ NOKPROBE_SYMBOL(__activate_traps); static void __deactivate_traps(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); extern char vectors[]; /* kernel exception vectors */ ___deactivate_traps(vcpu); @@ -88,6 +90,7 @@ NOKPROBE_SYMBOL(__deactivate_traps); void activate_traps_vhe_load(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __activate_traps_common(vcpu); } @@ -107,6 +110,7 @@ void deactivate_traps_vhe_put(void) /* Switch to the guest for VHE systems running in EL2 */ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; u64 exit_code; @@ -160,6 +164,7 @@ NOKPROBE_SYMBOL(__kvm_vcpu_run_vhe); int __kvm_vcpu_run(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); int ret; local_daif_mask(); @@ -197,9 +202,11 @@ static void __hyp_call_panic(u64 spsr, u64 elr, u64 par) { struct kvm_cpu_context *host_ctxt; struct kvm_vcpu *vcpu; + struct kvm_cpu_context *vcpu_ctxt; host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; vcpu = host_ctxt->__hyp_running_vcpu; + vcpu_ctxt = &vcpu_ctxt(vcpu); __deactivate_traps(vcpu); sysreg_restore_host_state_vhe(host_ctxt); diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c index 2a0b8c88d74f..37f56b4743d0 100644 --- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c @@ -63,6 +63,7 @@ 
NOKPROBE_SYMBOL(sysreg_restore_guest_state_vhe); */ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt; struct kvm_cpu_context *host_ctxt; @@ -97,6 +98,7 @@ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu) */ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu) { + struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt; struct kvm_cpu_context *host_ctxt;
From patchwork Fri Sep 24 12:53:37 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515421
Date: Fri, 24 Sep 2021 13:53:37 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-9-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
X-Mailer: git-send-email 2.33.0.685.g46640cef36-goog
Subject: [RFC PATCH v1 08/30] KVM: arm64: add hypervisor state accessors
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com
Part of the state in vcpu_arch is hypervisor-specific. To isolate that state in future patches, start by creating accessors for this state rather than by dereferencing vcpu. Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_host.h | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 097e5f533af9..280ee23dfc5a 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -373,6 +373,13 @@ struct kvm_vcpu_arch { } steal; }; +/* Accessors for vcpu parameters related to the hypervisor state. */ +#define vcpu_hcr_el2(vcpu) (vcpu)->arch.hcr_el2 +#define vcpu_mdcr_el2(vcpu) (vcpu)->arch.mdcr_el2 +#define vcpu_vsesr_el2(vcpu) (vcpu)->arch.vsesr_el2 +#define vcpu_fault(vcpu) (vcpu)->arch.fault +#define vcpu_flags(vcpu) (vcpu)->arch.flags + /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ sve_ffr_offset((vcpu)->arch.sve_max_vl))
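For a sense of how these accessors read at a call site, here is a minimal illustrative sketch. It is not part of the patch: kvm_example_set_traps() is a hypothetical function, and the constants are ones used elsewhere in this series.

/* Hypothetical illustration only; not part of this patch. */
static void kvm_example_set_traps(struct kvm_vcpu *vcpu)
{
	/* Set a trap bit through the accessor, not vcpu->arch.hcr_el2. */
	vcpu_hcr_el2(vcpu) |= HCR_TVM;

	/* The flags field is wrapped the same way. */
	vcpu_flags(vcpu) |= KVM_ARM64_INCREMENT_PC;
}

Wrapping these fields behind macros leaves later patches free to move the underlying storage without touching every call site.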
From patchwork Fri Sep 24 12:53:38 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515423
Date: Fri, 24 Sep 2021 13:53:38 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-10-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
X-Mailer: git-send-email 2.33.0.685.g46640cef36-goog
Subject: [RFC PATCH v1 09/30] KVM: arm64: COCCI: vcpu_hyp_accessors.cocci: use accessors for hypervisor state vcpu variables
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com
To simplify future refactoring, ensure that all accesses to the hypervisor-state fields in vcpu go through the accessors created earlier in this series, rather than dereferencing the vcpu directly. (A sketch of the kind of rule involved is shown below; the actual rule lives in vcpu_hyp_accessors.cocci, which is not included in this message.)
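As an illustrative sketch only (assuming the real rule's shape, since vcpu_hyp_accessors.cocci itself is not shown here), Coccinelle/SmPL rules of roughly this form perform the rewrite, one rule per wrapped field:

@@
struct kvm_vcpu *vcpu;
@@
- vcpu->arch.hcr_el2
+ vcpu_hcr_el2(vcpu)

@@
struct kvm_vcpu *vcpu;
@@
- vcpu->arch.flags
+ vcpu_flags(vcpu)

The mdcr_el2, vsesr_el2, and fault fields would be handled by analogous rules.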
The semantic patch is applied with the following command: spatch --sp-file cocci_refactor/vcpu_hyp_accessors.cocci --dir arch/arm64 --include-headers --in-place Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_emulate.h | 52 +++++++++++----------- arch/arm64/kvm/arm.c | 2 +- arch/arm64/kvm/debug.c | 28 ++++++------ arch/arm64/kvm/fpsimd.c | 20 ++++----- arch/arm64/kvm/guest.c | 2 +- arch/arm64/kvm/handle_exit.c | 2 +- arch/arm64/kvm/hyp/exception.c | 12 ++--- arch/arm64/kvm/hyp/include/hyp/debug-sr.h | 6 +-- arch/arm64/kvm/hyp/include/hyp/switch.h | 32 ++++++------- arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 4 +- arch/arm64/kvm/hyp/nvhe/debug-sr.c | 8 ++-- arch/arm64/kvm/hyp/nvhe/switch.c | 4 +- arch/arm64/kvm/hyp/vhe/switch.c | 2 +- arch/arm64/kvm/inject_fault.c | 10 ++--- arch/arm64/kvm/reset.c | 6 +-- arch/arm64/kvm/sys_regs.c | 4 +- 16 files changed, 97 insertions(+), 97 deletions(-) diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index ad6e53cef1a4..7d09a9356d89 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -43,23 +43,23 @@ void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr); static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu) { - return !(vcpu->arch.hcr_el2 & HCR_RW); + return !(vcpu_hcr_el2(vcpu) & HCR_RW); } static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu) { - vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS; + vcpu_hcr_el2(vcpu) = HCR_GUEST_FLAGS; if (is_kernel_in_hyp_mode()) - vcpu->arch.hcr_el2 |= HCR_E2H; + vcpu_hcr_el2(vcpu) |= HCR_E2H; if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN)) { /* route synchronous external abort exceptions to EL2 */ - vcpu->arch.hcr_el2 |= HCR_TEA; + vcpu_hcr_el2(vcpu) |= HCR_TEA; /* trap error record accesses */ - vcpu->arch.hcr_el2 |= HCR_TERR; + vcpu_hcr_el2(vcpu) |= HCR_TERR; } if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) { - vcpu->arch.hcr_el2 |= HCR_FWB; + vcpu_hcr_el2(vcpu) |= HCR_FWB; } else { /* * For non-FWB CPUs, we trap VM ops (HCR_EL2.TVM) until M+C @@ -67,11 +67,11 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu) * MMU gets turned on and do the necessary cache maintenance * then. */ - vcpu->arch.hcr_el2 |= HCR_TVM; + vcpu_hcr_el2(vcpu) |= HCR_TVM; } if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) - vcpu->arch.hcr_el2 &= ~HCR_RW; + vcpu_hcr_el2(vcpu) &= ~HCR_RW; /* * TID3: trap feature register accesses that we virtualise. @@ -79,52 +79,52 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu) * are currently virtualised. 
*/ if (!vcpu_el1_is_32bit(vcpu)) - vcpu->arch.hcr_el2 |= HCR_TID3; + vcpu_hcr_el2(vcpu) |= HCR_TID3; if (cpus_have_const_cap(ARM64_MISMATCHED_CACHE_TYPE) || vcpu_el1_is_32bit(vcpu)) - vcpu->arch.hcr_el2 |= HCR_TID2; + vcpu_hcr_el2(vcpu) |= HCR_TID2; } static inline unsigned long *vcpu_hcr(struct kvm_vcpu *vcpu) { - return (unsigned long *)&vcpu->arch.hcr_el2; + return (unsigned long *)&vcpu_hcr_el2(vcpu); } static inline void vcpu_clear_wfx_traps(struct kvm_vcpu *vcpu) { - vcpu->arch.hcr_el2 &= ~HCR_TWE; + vcpu_hcr_el2(vcpu) &= ~HCR_TWE; if (atomic_read(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vlpi_count) || vcpu->kvm->arch.vgic.nassgireq) - vcpu->arch.hcr_el2 &= ~HCR_TWI; - else - vcpu->arch.hcr_el2 |= HCR_TWI; + vcpu_hcr_el2(vcpu) &= ~HCR_TWI; + else + vcpu_hcr_el2(vcpu) |= HCR_TWI; } static inline void vcpu_set_wfx_traps(struct kvm_vcpu *vcpu) { - vcpu->arch.hcr_el2 |= HCR_TWE; - vcpu->arch.hcr_el2 |= HCR_TWI; + vcpu_hcr_el2(vcpu) |= HCR_TWE; + vcpu_hcr_el2(vcpu) |= HCR_TWI; } static inline void vcpu_ptrauth_enable(struct kvm_vcpu *vcpu) { - vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK); + vcpu_hcr_el2(vcpu) |= (HCR_API | HCR_APK); } static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu) { - vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK); + vcpu_hcr_el2(vcpu) &= ~(HCR_API | HCR_APK); } static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu) { - return vcpu->arch.vsesr_el2; + return vcpu_vsesr_el2(vcpu); } static inline void vcpu_set_vsesr(struct kvm_vcpu *vcpu, u64 vsesr) { - vcpu->arch.vsesr_el2 = vsesr; + vcpu_vsesr_el2(vcpu) = vsesr; } static __always_inline unsigned long *ctxt_pc(const struct kvm_cpu_context *ctxt) @@ -254,7 +254,7 @@ static inline bool vcpu_mode_priv(const struct kvm_vcpu *vcpu) static __always_inline u32 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu) { - return vcpu->arch.fault.esr_el2; + return vcpu_fault(vcpu).esr_el2; } static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu) @@ -269,17 +269,17 @@ static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu) static __always_inline unsigned long kvm_vcpu_get_hfar(const struct kvm_vcpu *vcpu) { - return vcpu->arch.fault.far_el2; + return vcpu_fault(vcpu).far_el2; } static __always_inline phys_addr_t kvm_vcpu_get_fault_ipa(const struct kvm_vcpu *vcpu) { - return ((phys_addr_t)vcpu->arch.fault.hpfar_el2 & HPFAR_MASK) << 8; + return ((phys_addr_t) vcpu_fault(vcpu).hpfar_el2 & HPFAR_MASK) << 8; } static inline u64 kvm_vcpu_get_disr(const struct kvm_vcpu *vcpu) { - return vcpu->arch.fault.disr_el1; + return vcpu_fault(vcpu).disr_el1; } static inline u32 kvm_vcpu_hvc_get_imm(const struct kvm_vcpu *vcpu) @@ -493,7 +493,7 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu, static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu) { - vcpu->arch.flags |= KVM_ARM64_INCREMENT_PC; + vcpu_flags(vcpu) |= KVM_ARM64_INCREMENT_PC; } static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature) diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index e720148232a0..5f0e2f9821ec 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -907,7 +907,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) * the vcpu state. Note that this relies on __kvm_adjust_pc() * being preempt-safe on VHE. 
*/ - if (unlikely(vcpu->arch.flags & (KVM_ARM64_PENDING_EXCEPTION | + if (unlikely(vcpu_flags(vcpu) & (KVM_ARM64_PENDING_EXCEPTION | KVM_ARM64_INCREMENT_PC))) kvm_call_hyp(__kvm_adjust_pc, vcpu); diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c index d5e79d7ee6e9..e7a5956fe648 100644 --- a/arch/arm64/kvm/debug.c +++ b/arch/arm64/kvm/debug.c @@ -87,8 +87,8 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu) * This also clears MDCR_EL2_E2PB_MASK and MDCR_EL2_E2TB_MASK * to disable guest access to the profiling and trace buffers */ - vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK; - vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM | + vcpu_mdcr_el2(vcpu) = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK; + vcpu_mdcr_el2(vcpu) |= (MDCR_EL2_TPM | MDCR_EL2_TPMS | MDCR_EL2_TTRF | MDCR_EL2_TPMCR | @@ -98,7 +98,7 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu) /* Is the VM being debugged by userspace? */ if (vcpu->guest_debug) /* Route all software debug exceptions to EL2 */ - vcpu->arch.mdcr_el2 |= MDCR_EL2_TDE; + vcpu_mdcr_el2(vcpu) |= MDCR_EL2_TDE; /* * Trap debug register access when one of the following is true: @@ -107,10 +107,10 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu) * - The guest is not using debug (KVM_ARM64_DEBUG_DIRTY is clear). */ if ((vcpu->guest_debug & KVM_GUESTDBG_USE_HW) || - !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) - vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA; + !(vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY)) + vcpu_mdcr_el2(vcpu) |= MDCR_EL2_TDA; - trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2); + trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu_mdcr_el2(vcpu)); } /** @@ -154,7 +154,7 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) { - unsigned long mdscr, orig_mdcr_el2 = vcpu->arch.mdcr_el2; + unsigned long mdscr, orig_mdcr_el2 = vcpu_mdcr_el2(vcpu); trace_kvm_arm_setup_debug(vcpu, vcpu->guest_debug); @@ -214,7 +214,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) vcpu_write_sys_reg(vcpu, mdscr, MDSCR_EL1); vcpu->arch.debug_ptr = &vcpu->arch.external_debug_state; - vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY; + vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_DIRTY; trace_kvm_arm_set_regset("BKPTS", get_num_brps(), &vcpu->arch.debug_ptr->dbg_bcr[0], @@ -231,11 +231,11 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) /* If KDE or MDE are set, perform a full save/restore cycle. 
*/ if (vcpu_read_sys_reg(vcpu, MDSCR_EL1) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE)) - vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY; + vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_DIRTY; /* Write mdcr_el2 changes since vcpu_load on VHE systems */ - if (has_vhe() && orig_mdcr_el2 != vcpu->arch.mdcr_el2) - write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); + if (has_vhe() && orig_mdcr_el2 != vcpu_mdcr_el2(vcpu)) + write_sysreg(vcpu_mdcr_el2(vcpu), mdcr_el2); trace_kvm_arm_set_dreg32("MDSCR_EL1", vcpu_read_sys_reg(vcpu, MDSCR_EL1)); } @@ -280,16 +280,16 @@ void kvm_arch_vcpu_load_debug_state_flags(struct kvm_vcpu *vcpu) */ if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_PMSVER_SHIFT) && !(read_sysreg_s(SYS_PMBIDR_EL1) & BIT(SYS_PMBIDR_EL1_P_SHIFT))) - vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_SPE; + vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_STATE_SAVE_SPE; /* Check if we have TRBE implemented and available at the host */ if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_TRBE_SHIFT) && !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_PROG)) - vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_TRBE; + vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_STATE_SAVE_TRBE; } void kvm_arch_vcpu_put_debug_state_flags(struct kvm_vcpu *vcpu) { - vcpu->arch.flags &= ~(KVM_ARM64_DEBUG_STATE_SAVE_SPE | + vcpu_flags(vcpu) &= ~(KVM_ARM64_DEBUG_STATE_SAVE_SPE | KVM_ARM64_DEBUG_STATE_SAVE_TRBE); } diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index db135588236a..1871a267e2ed 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -74,16 +74,16 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) { BUG_ON(!current->mm); - vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | - KVM_ARM64_HOST_SVE_IN_USE | - KVM_ARM64_HOST_SVE_ENABLED); - vcpu->arch.flags |= KVM_ARM64_FP_HOST; + vcpu_flags(vcpu) &= ~(KVM_ARM64_FP_ENABLED | + KVM_ARM64_HOST_SVE_IN_USE | + KVM_ARM64_HOST_SVE_ENABLED); + vcpu_flags(vcpu) |= KVM_ARM64_FP_HOST; if (test_thread_flag(TIF_SVE)) - vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE; + vcpu_flags(vcpu) |= KVM_ARM64_HOST_SVE_IN_USE; if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN) - vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED; + vcpu_flags(vcpu) |= KVM_ARM64_HOST_SVE_ENABLED; } /* @@ -96,7 +96,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) { WARN_ON_ONCE(!irqs_disabled()); - if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) { + if (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED) { fpsimd_bind_state_to_cpu(vcpu_fp_regs(vcpu), vcpu->arch.sve_state, vcpu->arch.sve_max_vl); @@ -120,7 +120,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) local_irq_save(flags); - if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) { + if (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED) { if (guest_has_sve) { __vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR); @@ -139,14 +139,14 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) * for EL0. 
To avoid spurious traps, restore the trap state * seen by kvm_arch_vcpu_load_fp(): */ - if (vcpu->arch.flags & KVM_ARM64_HOST_SVE_ENABLED) + if (vcpu_flags(vcpu) & KVM_ARM64_HOST_SVE_ENABLED) sysreg_clear_set(CPACR_EL1, 0, CPACR_EL1_ZEN_EL0EN); else sysreg_clear_set(CPACR_EL1, CPACR_EL1_ZEN_EL0EN, 0); } update_thread_flag(TIF_SVE, - vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE); + vcpu_flags(vcpu) & KVM_ARM64_HOST_SVE_IN_USE); local_irq_restore(flags); } diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index c4429307a164..fc63e55db2f0 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -782,7 +782,7 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu, struct kvm_vcpu_events *events) { - events->exception.serror_pending = !!(vcpu->arch.hcr_el2 & HCR_VSE); + events->exception.serror_pending = !!(vcpu_hcr_el2(vcpu) & HCR_VSE); events->exception.serror_has_esr = cpus_have_const_cap(ARM64_HAS_RAS_EXTN); if (events->exception.serror_pending && events->exception.serror_has_esr) diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index 6f48336b1d86..22e9f03fe901 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -126,7 +126,7 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu) switch (ESR_ELx_EC(esr)) { case ESR_ELx_EC_WATCHPT_LOW: - run->debug.arch.far = vcpu->arch.fault.far_el2; + run->debug.arch.far = vcpu_fault(vcpu).far_el2; fallthrough; case ESR_ELx_EC_SOFTSTP_LOW: case ESR_ELx_EC_BREAKPT_LOW: diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c index e23b9cedb043..4514e345c26f 100644 --- a/arch/arm64/kvm/hyp/exception.c +++ b/arch/arm64/kvm/hyp/exception.c @@ -328,7 +328,7 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (vcpu_el1_is_32bit(vcpu)) { - switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) { + switch (vcpu_flags(vcpu) & KVM_ARM64_EXCEPT_MASK) { case KVM_ARM64_EXCEPT_AA32_UND: enter_exception32(vcpu_ctxt, PSR_AA32_MODE_UND, 4); break; @@ -343,7 +343,7 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu) break; } } else { - switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) { + switch (vcpu_flags(vcpu) & KVM_ARM64_EXCEPT_MASK) { case (KVM_ARM64_EXCEPT_AA64_ELx_SYNC | KVM_ARM64_EXCEPT_AA64_EL1): enter_exception64(vcpu_ctxt, PSR_MODE_EL1h, @@ -367,12 +367,12 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu) void __kvm_adjust_pc(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - if (vcpu->arch.flags & KVM_ARM64_PENDING_EXCEPTION) { + if (vcpu_flags(vcpu) & KVM_ARM64_PENDING_EXCEPTION) { kvm_inject_exception(vcpu); - vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION | + vcpu_flags(vcpu) &= ~(KVM_ARM64_PENDING_EXCEPTION | KVM_ARM64_EXCEPT_MASK); - } else if (vcpu->arch.flags & KVM_ARM64_INCREMENT_PC) { + } else if (vcpu_flags(vcpu) & KVM_ARM64_INCREMENT_PC) { kvm_skip_instr(vcpu); - vcpu->arch.flags &= ~KVM_ARM64_INCREMENT_PC; + vcpu_flags(vcpu) &= ~KVM_ARM64_INCREMENT_PC; } } diff --git a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h index 4ebe9f558f3a..55735782d7e3 100644 --- a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h @@ -132,7 +132,7 @@ static inline void __debug_switch_to_guest_common(struct kvm_vcpu *vcpu) struct kvm_guest_debug_arch *host_dbg; struct kvm_guest_debug_arch *guest_dbg; - if (!(vcpu->arch.flags & 
KVM_ARM64_DEBUG_DIRTY)) + if (!(vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY)) return; host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; @@ -151,7 +151,7 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu) struct kvm_guest_debug_arch *host_dbg; struct kvm_guest_debug_arch *guest_dbg; - if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) + if (!(vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY)) return; host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; @@ -162,7 +162,7 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu) __debug_save_state(guest_dbg, guest_ctxt); __debug_restore_state(host_dbg, host_ctxt); - vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY; + vcpu_flags(vcpu) &= ~KVM_ARM64_DEBUG_DIRTY; } #endif /* __ARM64_KVM_HYP_DEBUG_SR_H__ */ diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 41c553a7b5dd..370a8fb827be 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -45,10 +45,10 @@ static inline bool update_fp_enabled(struct kvm_vcpu *vcpu) */ if (!system_supports_fpsimd() || vcpu->arch.host_thread_info->flags & _TIF_FOREIGN_FPSTATE) - vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | + vcpu_flags(vcpu) &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST); - return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED); + return !!(vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED); } /* Save the 32-bit only FPSIMD system register state */ @@ -94,7 +94,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu) write_sysreg(0, pmselr_el0); write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0); } - write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); + write_sysreg(vcpu_mdcr_el2(vcpu), mdcr_el2); } static inline void __deactivate_traps_common(void) @@ -106,7 +106,7 @@ static inline void __deactivate_traps_common(void) static inline void ___activate_traps(struct kvm_vcpu *vcpu) { - u64 hcr = vcpu->arch.hcr_el2; + u64 hcr = vcpu_hcr_el2(vcpu); if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM)) hcr |= HCR_TVM; @@ -114,7 +114,7 @@ static inline void ___activate_traps(struct kvm_vcpu *vcpu) write_sysreg(hcr, hcr_el2); if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE)) - write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2); + write_sysreg_s(vcpu_vsesr_el2(vcpu), SYS_VSESR_EL2); } static inline void ___deactivate_traps(struct kvm_vcpu *vcpu) @@ -125,9 +125,9 @@ static inline void ___deactivate_traps(struct kvm_vcpu *vcpu) * the crucial bit is "On taking a vSError interrupt, * HCR_EL2.VSE is cleared to 0." 
*/ - if (vcpu->arch.hcr_el2 & HCR_VSE) { - vcpu->arch.hcr_el2 &= ~HCR_VSE; - vcpu->arch.hcr_el2 |= read_sysreg(hcr_el2) & HCR_VSE; + if (vcpu_hcr_el2(vcpu) & HCR_VSE) { + vcpu_hcr_el2(vcpu) &= ~HCR_VSE; + vcpu_hcr_el2(vcpu) |= read_sysreg(hcr_el2) & HCR_VSE; } } @@ -196,13 +196,13 @@ static inline bool __populate_fault_info(struct kvm_vcpu *vcpu) u8 ec; u64 esr; - esr = vcpu->arch.fault.esr_el2; + esr = vcpu_fault(vcpu).esr_el2; ec = ESR_ELx_EC(esr); if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW) return true; - return __get_fault_info(esr, &vcpu->arch.fault); + return __get_fault_info(esr, &vcpu_fault(vcpu)); } static inline void __hyp_sve_save_host(struct kvm_vcpu *vcpu) @@ -237,7 +237,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) if (system_supports_sve()) { sve_guest = vcpu_has_sve(vcpu); - sve_host = vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE; + sve_host = vcpu_flags(vcpu) & KVM_ARM64_HOST_SVE_IN_USE; } else { sve_guest = false; sve_host = false; @@ -268,13 +268,13 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) } isb(); - if (vcpu->arch.flags & KVM_ARM64_FP_HOST) { + if (vcpu_flags(vcpu) & KVM_ARM64_FP_HOST) { if (sve_host) __hyp_sve_save_host(vcpu); else __fpsimd_save_state(vcpu->arch.host_fpsimd_state); - vcpu->arch.flags &= ~KVM_ARM64_FP_HOST; + vcpu_flags(vcpu) &= ~KVM_ARM64_FP_HOST; } if (sve_guest) @@ -287,7 +287,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) write_sysreg(ctxt_sys_reg(vcpu_ctxt, FPEXC32_EL2), fpexc32_el2); - vcpu->arch.flags |= KVM_ARM64_FP_ENABLED; + vcpu_flags(vcpu) |= KVM_ARM64_FP_ENABLED; return true; } @@ -303,7 +303,7 @@ static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu) * The normal sysreg handling code expects to see the traps, * let's not do anything here. 
*/ - if (vcpu->arch.hcr_el2 & HCR_TVM) + if (vcpu_hcr_el2(vcpu) & HCR_TVM) return false; switch (sysreg) { @@ -421,7 +421,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) { struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) - vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR); + vcpu_fault(vcpu).esr_el2 = read_sysreg_el2(SYS_ESR); if (ARM_SERROR_PENDING(*exit_code)) { u8 esr_ec = kvm_vcpu_trap_get_class(vcpu); diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h index c2668b85b67e..d49985e825cd 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -170,7 +170,7 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu) ctxt_sys_reg(vcpu_ctxt, DACR32_EL2) = read_sysreg(dacr32_el2); ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2) = read_sysreg(ifsr32_el2); - if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) + if (has_vhe() || vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY) ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2); } @@ -188,7 +188,7 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu) write_sysreg(ctxt_sys_reg(vcpu_ctxt, DACR32_EL2), dacr32_el2); write_sysreg(ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2), ifsr32_el2); - if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) + if (has_vhe() || vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY) write_sysreg(ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2), dbgvcr32_el2); } diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c index 7d3f25868cae..934737478d64 100644 --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c @@ -84,10 +84,10 @@ static void __debug_restore_trace(u64 trfcr_el1) void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu) { /* Disable and flush SPE data generation */ - if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE) + if (vcpu_flags(vcpu) & KVM_ARM64_DEBUG_STATE_SAVE_SPE) __debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1); /* Disable and flush Self-Hosted Trace generation */ - if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE) + if (vcpu_flags(vcpu) & KVM_ARM64_DEBUG_STATE_SAVE_TRBE) __debug_save_trace(&vcpu->arch.host_debug_state.trfcr_el1); } @@ -98,9 +98,9 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu) void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu) { - if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE) + if (vcpu_flags(vcpu) & KVM_ARM64_DEBUG_STATE_SAVE_SPE) __debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1); - if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE) + if (vcpu_flags(vcpu) & KVM_ARM64_DEBUG_STATE_SAVE_TRBE) __debug_restore_trace(vcpu->arch.host_debug_state.trfcr_el1); } diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index d5780acab6c2..ac7529305717 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -104,7 +104,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu) write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2); cptr = CPTR_EL2_DEFAULT; - if (vcpu_has_sve(vcpu) && (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)) + if (vcpu_has_sve(vcpu) && (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED)) cptr |= CPTR_EL2_TZ; write_sysreg(cptr, cptr_el2); @@ -241,7 +241,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) __sysreg_restore_state_nvhe(host_ctxt); - if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) + if (vcpu_flags(vcpu) & 
KVM_ARM64_FP_ENABLED) __fpsimd_save_fpexc32(vcpu); __debug_switch_to_host(vcpu); diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index c2e443202f8e..0113d442bc95 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -153,7 +153,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) sysreg_restore_host_state_vhe(host_ctxt); - if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) + if (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED) __fpsimd_save_fpexc32(vcpu); __debug_switch_to_host(vcpu); diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c index b47df73e98d7..867e8856bdcd 100644 --- a/arch/arm64/kvm/inject_fault.c +++ b/arch/arm64/kvm/inject_fault.c @@ -20,7 +20,7 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr bool is_aarch32 = vcpu_mode_is_32bit(vcpu); u32 esr = 0; - vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 | + vcpu_flags(vcpu) |= (KVM_ARM64_EXCEPT_AA64_EL1 | KVM_ARM64_EXCEPT_AA64_ELx_SYNC | KVM_ARM64_PENDING_EXCEPTION); @@ -52,7 +52,7 @@ static void inject_undef64(struct kvm_vcpu *vcpu) { u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT); - vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 | + vcpu_flags(vcpu) |= (KVM_ARM64_EXCEPT_AA64_EL1 | KVM_ARM64_EXCEPT_AA64_ELx_SYNC | KVM_ARM64_PENDING_EXCEPTION); @@ -73,7 +73,7 @@ static void inject_undef64(struct kvm_vcpu *vcpu) static void inject_undef32(struct kvm_vcpu *vcpu) { - vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_UND | + vcpu_flags(vcpu) |= (KVM_ARM64_EXCEPT_AA32_UND | KVM_ARM64_PENDING_EXCEPTION); } @@ -97,13 +97,13 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt, u32 addr) far = vcpu_read_sys_reg(vcpu, FAR_EL1); if (is_pabt) { - vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_IABT | + vcpu_flags(vcpu) |= (KVM_ARM64_EXCEPT_AA32_IABT | KVM_ARM64_PENDING_EXCEPTION); far &= GENMASK(31, 0); far |= (u64)addr << 32; vcpu_write_sys_reg(vcpu, fsr, IFSR32_EL2); } else { /* !iabt */ - vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA32_DABT | + vcpu_flags(vcpu) |= (KVM_ARM64_EXCEPT_AA32_DABT | KVM_ARM64_PENDING_EXCEPTION); far &= GENMASK(63, 32); far |= addr; diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index ab1ef5313a3e..f94b5b07d2cf 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -81,7 +81,7 @@ static int kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) * KVM_REG_ARM64_SVE_VLS. Allocation is deferred until * kvm_arm_vcpu_finalize(), which freezes the configuration. 
*/ - vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_SVE; + vcpu_flags(vcpu) |= KVM_ARM64_GUEST_HAS_SVE; return 0; } @@ -111,7 +111,7 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu) return -ENOMEM; vcpu->arch.sve_state = buf; - vcpu->arch.flags |= KVM_ARM64_VCPU_SVE_FINALIZED; + vcpu_flags(vcpu) |= KVM_ARM64_VCPU_SVE_FINALIZED; return 0; } @@ -162,7 +162,7 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu) !system_has_full_ptr_auth()) return -EINVAL; - vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH; + vcpu_flags(vcpu) |= KVM_ARM64_GUEST_HAS_PTRAUTH; return 0; } diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 1a7968ad078c..8fb57e83e9ec 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -348,7 +348,7 @@ static bool trap_debug_regs(struct kvm_vcpu *vcpu, { if (p->is_write) { vcpu_write_sys_reg(vcpu, p->regval, r->reg); - vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY; + vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_DIRTY; } else { p->regval = vcpu_read_sys_reg(vcpu, r->reg); } @@ -381,7 +381,7 @@ static void reg_to_dbg(struct kvm_vcpu *vcpu, val |= (p->regval & (mask >> shift)) << shift; *dbg_reg = val; - vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY; + vcpu_flags(vcpu) |= KVM_ARM64_DEBUG_DIRTY; } static void dbg_to_reg(struct kvm_vcpu *vcpu,

From patchwork Fri Sep 24 12:53:39 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515425
Date: Fri, 24 Sep 2021 13:53:39 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-11-tabba@google.com>
Subject: [RFC PATCH v1 10/30] KVM: arm64: Add accessors for hypervisor state in kvm_vcpu_arch
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com
List-ID: X-Mailing-List: kvm@vger.kernel.org

Some of the members of vcpu_arch represent state that belongs to the hypervisor. Future patches will factor these out into their own structure. To simplify that refactoring and make the result easier to read, add accessors for the members of kvm_vcpu_arch that represent the hypervisor state.
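The pattern, shown here with the ESR accessor taken from the diff below, is to introduce a helper that takes only the hypervisor-state portion and to turn the existing vcpu-scoped helper into a thin wrapper around it:

	/* Accessor scoped to the (future) hypervisor state only. */
	static __always_inline u32 kvm_hyp_state_get_esr(const struct vcpu_hyp_state *vcpu_hyps)
	{
		return hyp_state_fault(vcpu_hyps).esr_el2;
	}

	/* The existing vcpu-scoped helper becomes a thin wrapper. */
	static __always_inline u32 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu)
	{
		return kvm_hyp_state_get_esr(&hyp_state(vcpu));
	}

Callers that only need the fault information can then be switched over to the narrower helper by later patches in the series.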
Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_emulate.h | 182 ++++++++++++++++++++++----- arch/arm64/include/asm/kvm_host.h | 38 ++++-- 2 files changed, 181 insertions(+), 39 deletions(-) diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index 7d09a9356d89..e095afeecd10 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -41,9 +41,14 @@ void kvm_inject_vabt(struct kvm_vcpu *vcpu); void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr); void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr); +static __always_inline bool hyp_state_el1_is_32bit(struct vcpu_hyp_state *vcpu_hyps) +{ + return !(hyp_state_hcr_el2(vcpu_hyps) & HCR_RW); +} + static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu) { - return !(vcpu_hcr_el2(vcpu) & HCR_RW); + return hyp_state_el1_is_32bit(&hyp_state(vcpu)); } static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu) @@ -252,14 +257,19 @@ static inline bool vcpu_mode_priv(const struct kvm_vcpu *vcpu) return mode != PSR_MODE_EL0t; } +static __always_inline u32 kvm_hyp_state_get_esr(const struct vcpu_hyp_state *vcpu_hyps) +{ + return hyp_state_fault(vcpu_hyps).esr_el2; +} + static __always_inline u32 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu) { - return vcpu_fault(vcpu).esr_el2; + return kvm_hyp_state_get_esr(&hyp_state(vcpu)); } -static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu) +static __always_inline u32 kvm_hyp_state_get_condition(const struct vcpu_hyp_state *vcpu_hyps) { - u32 esr = kvm_vcpu_get_esr(vcpu); + u32 esr = kvm_hyp_state_get_esr(vcpu_hyps); if (esr & ESR_ELx_CV) return (esr & ESR_ELx_COND_MASK) >> ESR_ELx_COND_SHIFT; @@ -267,111 +277,216 @@ static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu) return -1; } +static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu) +{ + return kvm_hyp_state_get_condition(&hyp_state(vcpu)); +} + +static __always_inline phys_addr_t kvm_hyp_state_get_hfar(const struct vcpu_hyp_state *vcpu_hyps) +{ + return hyp_state_fault(vcpu_hyps).far_el2; +} + static __always_inline unsigned long kvm_vcpu_get_hfar(const struct kvm_vcpu *vcpu) { - return vcpu_fault(vcpu).far_el2; + return kvm_hyp_state_get_hfar(&hyp_state(vcpu)); +} + +static __always_inline phys_addr_t kvm_hyp_state_get_fault_ipa(const struct vcpu_hyp_state *vcpu_hyps) +{ + return ((phys_addr_t) hyp_state_fault(vcpu_hyps).hpfar_el2 & HPFAR_MASK) << 8; } static __always_inline phys_addr_t kvm_vcpu_get_fault_ipa(const struct kvm_vcpu *vcpu) { - return ((phys_addr_t) vcpu_fault(vcpu).hpfar_el2 & HPFAR_MASK) << 8; + return kvm_hyp_state_get_fault_ipa(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_get_disr(const struct vcpu_hyp_state *vcpu_hyps) +{ + return hyp_state_fault(vcpu_hyps).disr_el1; } static inline u64 kvm_vcpu_get_disr(const struct kvm_vcpu *vcpu) { - return vcpu_fault(vcpu).disr_el1; + return kvm_hyp_state_get_disr(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_get_imm(const struct vcpu_hyp_state *vcpu_hyps) +{ + return kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_xVC_IMM_MASK; } static inline u32 kvm_vcpu_hvc_get_imm(const struct kvm_vcpu *vcpu) { - return kvm_vcpu_get_esr(vcpu) & ESR_ELx_xVC_IMM_MASK; + return kvm_hyp_state_get_imm(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_dabt_isvalid(const struct vcpu_hyp_state *vcpu_hyps) +{ + return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_ISV); } static 
__always_inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu) { - return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_ISV); + return kvm_hyp_state_dabt_isvalid(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_iss_nisv_sanitized(const struct vcpu_hyp_state *vcpu_hyps) +{ + return kvm_hyp_state_get_esr(vcpu_hyps) & (ESR_ELx_CM | ESR_ELx_WNR | ESR_ELx_FSC); } static inline unsigned long kvm_vcpu_dabt_iss_nisv_sanitized(const struct kvm_vcpu *vcpu) { - return kvm_vcpu_get_esr(vcpu) & (ESR_ELx_CM | ESR_ELx_WNR | ESR_ELx_FSC); + return kvm_hyp_state_iss_nisv_sanitized(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_issext(const struct vcpu_hyp_state *vcpu_hyps) +{ + return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_SSE); } static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu) { - return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_SSE); + return kvm_hyp_state_issext(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_issf(const struct vcpu_hyp_state *vcpu_hyps) +{ + return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_SF); } static inline bool kvm_vcpu_dabt_issf(const struct kvm_vcpu *vcpu) { - return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_SF); + return kvm_hyp_state_issf(&hyp_state(vcpu)); +} + +static __always_inline phys_addr_t kvm_hyp_state_dabt_get_rd(const struct vcpu_hyp_state *vcpu_hyps) +{ + return (kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT; } static __always_inline int kvm_vcpu_dabt_get_rd(const struct kvm_vcpu *vcpu) { - return (kvm_vcpu_get_esr(vcpu) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT; + return kvm_hyp_state_dabt_get_rd(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_abt_iss1tw(const struct vcpu_hyp_state *vcpu_hyps) +{ + return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_S1PTW); } static __always_inline bool kvm_vcpu_abt_iss1tw(const struct kvm_vcpu *vcpu) { - return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_S1PTW); + return kvm_hyp_state_abt_iss1tw(&hyp_state(vcpu)); } /* Always check for S1PTW *before* using this. 
*/ +static __always_inline u32 kvm_hyp_state_dabt_iswrite(const struct vcpu_hyp_state *vcpu_hyps) +{ + return kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_WNR; +} + static __always_inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu) { - return kvm_vcpu_get_esr(vcpu) & ESR_ELx_WNR; + return kvm_hyp_state_dabt_iswrite(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_dabt_is_cm(const struct vcpu_hyp_state *vcpu_hyps) +{ + return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_CM); } static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu) { - return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_CM); + return kvm_hyp_state_dabt_is_cm(&hyp_state(vcpu)); +} + +static __always_inline phys_addr_t kvm_hyp_state_dabt_get_as(const struct vcpu_hyp_state *vcpu_hyps) +{ + return 1 << ((kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_SAS) >> ESR_ELx_SAS_SHIFT); } static __always_inline unsigned int kvm_vcpu_dabt_get_as(const struct kvm_vcpu *vcpu) { - return 1 << ((kvm_vcpu_get_esr(vcpu) & ESR_ELx_SAS) >> ESR_ELx_SAS_SHIFT); + return kvm_hyp_state_dabt_get_as(&hyp_state(vcpu)); } /* This one is not specific to Data Abort */ +static __always_inline u32 kvm_hyp_state_trap_il_is32bit(const struct vcpu_hyp_state *vcpu_hyps) +{ + return !!(kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_IL); +} + static __always_inline bool kvm_vcpu_trap_il_is32bit(const struct kvm_vcpu *vcpu) { - return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_IL); + return kvm_hyp_state_trap_il_is32bit(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_trap_get_class(const struct vcpu_hyp_state *vcpu_hyps) +{ + return ESR_ELx_EC(kvm_hyp_state_get_esr(vcpu_hyps)); } static __always_inline u8 kvm_vcpu_trap_get_class(const struct kvm_vcpu *vcpu) { - return ESR_ELx_EC(kvm_vcpu_get_esr(vcpu)); + return kvm_hyp_state_trap_get_class(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_trap_is_iabt(const struct vcpu_hyp_state *vcpu_hyps) +{ + return kvm_hyp_state_trap_get_class(vcpu_hyps) == ESR_ELx_EC_IABT_LOW; } static inline bool kvm_vcpu_trap_is_iabt(const struct kvm_vcpu *vcpu) { - return kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_IABT_LOW; + return kvm_hyp_state_trap_is_iabt(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_trap_is_exec_fault(const struct vcpu_hyp_state *vcpu_hyps) +{ + return kvm_hyp_state_trap_is_iabt(vcpu_hyps) && !kvm_hyp_state_abt_iss1tw(vcpu_hyps); } static inline bool kvm_vcpu_trap_is_exec_fault(const struct kvm_vcpu *vcpu) { - return kvm_vcpu_trap_is_iabt(vcpu) && !kvm_vcpu_abt_iss1tw(vcpu); + return kvm_hyp_state_trap_is_exec_fault(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_trap_get_fault(const struct vcpu_hyp_state *vcpu_hyps) +{ + return kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_FSC; } static __always_inline u8 kvm_vcpu_trap_get_fault(const struct kvm_vcpu *vcpu) { - return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC; + return kvm_hyp_state_trap_get_fault(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_trap_get_fault_type(const struct vcpu_hyp_state *vcpu_hyps) +{ + return kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_FSC_TYPE; } static __always_inline u8 kvm_vcpu_trap_get_fault_type(const struct kvm_vcpu *vcpu) { - return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC_TYPE; + return kvm_hyp_state_trap_get_fault_type(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_trap_get_fault_level(const struct vcpu_hyp_state *vcpu_hyps) +{ + return kvm_hyp_state_get_esr(vcpu_hyps) & ESR_ELx_FSC_LEVEL; } static __always_inline u8 
kvm_vcpu_trap_get_fault_level(const struct kvm_vcpu *vcpu) { - return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC_LEVEL; + return kvm_hyp_state_trap_get_fault_level(&hyp_state(vcpu)); } -static __always_inline bool kvm_vcpu_abt_issea(const struct kvm_vcpu *vcpu) +static __always_inline u32 kvm_hyp_state_abt_issea(const struct vcpu_hyp_state *vcpu_hyps) { - switch (kvm_vcpu_trap_get_fault(vcpu)) { + switch (kvm_hyp_state_trap_get_fault(vcpu_hyps)) { case FSC_SEA: case FSC_SEA_TTW0: case FSC_SEA_TTW1: @@ -388,12 +503,23 @@ static __always_inline bool kvm_vcpu_abt_issea(const struct kvm_vcpu *vcpu) } } -static __always_inline int kvm_vcpu_sys_get_rt(struct kvm_vcpu *vcpu) +static __always_inline bool kvm_vcpu_abt_issea(const struct kvm_vcpu *vcpu) +{ + return kvm_hyp_state_abt_issea(&hyp_state(vcpu)); +} + +static __always_inline u32 kvm_hyp_state_sys_get_rt(const struct vcpu_hyp_state *vcpu_hyps) { - u32 esr = kvm_vcpu_get_esr(vcpu); + u32 esr = kvm_hyp_state_get_esr(vcpu_hyps); return ESR_ELx_SYS64_ISS_RT(esr); } + +static __always_inline int kvm_vcpu_sys_get_rt(struct kvm_vcpu *vcpu) +{ + return kvm_hyp_state_sys_get_rt(&hyp_state(vcpu)); +} + static inline bool kvm_is_write_fault(struct kvm_vcpu *vcpu) { if (kvm_vcpu_abt_iss1tw(vcpu)) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 280ee23dfc5a..3e5c173d2360 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -373,12 +373,21 @@ struct kvm_vcpu_arch { } steal; }; +#define hyp_state(vcpu) ((vcpu)->arch) + +/* Accessors for hyp_state parameters related to the hypervisor state. */ +#define hyp_state_hcr_el2(hyps) (hyps)->hcr_el2 +#define hyp_state_mdcr_el2(hyps) (hyps)->mdcr_el2 +#define hyp_state_vsesr_el2(hyps) (hyps)->vsesr_el2 +#define hyp_state_fault(hyps) (hyps)->fault +#define hyp_state_flags(hyps) (hyps)->flags + /* Accessors for vcpu parameters related to the hypervisor state.
*/ -#define vcpu_hcr_el2(vcpu) (vcpu)->arch.hcr_el2 -#define vcpu_mdcr_el2(vcpu) (vcpu)->arch.mdcr_el2 -#define vcpu_vsesr_el2(vcpu) (vcpu)->arch.vsesr_el2 -#define vcpu_fault(vcpu) (vcpu)->arch.fault -#define vcpu_flags(vcpu) (vcpu)->arch.flags +#define vcpu_hcr_el2(vcpu) hyp_state_hcr_el2(&hyp_state(vcpu)) +#define vcpu_mdcr_el2(vcpu) hyp_state_mdcr_el2(&hyp_state(vcpu)) +#define vcpu_vsesr_el2(vcpu) hyp_state_vsesr_el2(&hyp_state(vcpu)) +#define vcpu_fault(vcpu) hyp_state_fault(&hyp_state(vcpu)) +#define vcpu_flags(vcpu) hyp_state_flags(&hyp_state(vcpu)) /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ @@ -441,18 +450,22 @@ struct kvm_vcpu_arch { */ #define KVM_ARM64_INCREMENT_PC (1 << 9) /* Increment PC */ -#define vcpu_has_sve(vcpu) (system_supports_sve() && \ - ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE)) +#define hyp_state_has_sve(hyps) (system_supports_sve() && \ + (hyp_state_flags((hyps)) & KVM_ARM64_GUEST_HAS_SVE)) + +#define vcpu_has_sve(vcpu) hyp_state_has_sve(&hyp_state(vcpu)) #ifdef CONFIG_ARM64_PTR_AUTH -#define vcpu_has_ptrauth(vcpu) \ +#define hyp_state_has_ptrauth(hyps) \ ((cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH) || \ cpus_have_final_cap(ARM64_HAS_GENERIC_AUTH)) && \ - (vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH) + hyp_state_flags(hyps) & KVM_ARM64_GUEST_HAS_PTRAUTH) #else -#define vcpu_has_ptrauth(vcpu) false +#define hyp_state_has_ptrauth(hyps) false #endif +#define vcpu_has_ptrauth(vcpu) hyp_state_has_ptrauth(&hyp_state(vcpu)) + #define vcpu_ctxt(vcpu) ((vcpu)->arch.ctxt) /* VCPU Context accessors (direct) */ @@ -794,8 +807,11 @@ static inline bool kvm_vm_is_protected(struct kvm *kvm) int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature); bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu); +#define kvm_arm_hyp_state_sve_finalized(hyps) \ + (hyp_state_flags((hyps)) & KVM_ARM64_VCPU_SVE_FINALIZED) + #define kvm_arm_vcpu_sve_finalized(vcpu) \ - ((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED) + kvm_arm_hyp_state_sve_finalized(&hyp_state(vcpu)) #define kvm_vcpu_has_pmu(vcpu) \ (test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))

From patchwork Fri Sep 24 12:53:40 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515427
Date: Fri, 24 Sep 2021 13:53:40 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-12-tabba@google.com>
Subject: [RFC PATCH v1 11/30] KVM: arm64: create and use a new vcpu_hyp_state struct
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com
List-ID: X-Mailing-List: kvm@vger.kernel.org

Create a struct for the hypervisor state from the related fields in vcpu_arch. This is needed by future patches to reduce the scope of functions from the vcpu as a whole to only the relevant state, via this newly created struct. Create a new instance of this struct in vcpu_arch, update the accessors to use the new fields, and remove the old fields from vcpu_arch.
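Since all users already go through the hyp_state() accessor added by the previous patch, moving the fields boils down to repointing that one macro; schematically:

	/* Before this patch: the "hyp state" was vcpu_arch itself. */
	#define hyp_state(vcpu) ((vcpu)->arch)

	/* After this patch: it is the new embedded struct. */
	#define hyp_state(vcpu) ((vcpu)->arch.hyp_state)

Call sites are untouched: vcpu_flags(vcpu) still expands via hyp_state_flags(&hyp_state(vcpu)), which now resolves to (vcpu)->arch.hyp_state.flags.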
Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_host.h | 35 ++++++++++++++++++------------- arch/arm64/kernel/asm-offsets.c | 2 +- 2 files changed, 21 insertions(+), 16 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 3e5c173d2360..dc4b5e133d86 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -269,27 +269,35 @@ struct vcpu_reset_state { bool reset; }; +/* Holds the hyp-relevant data of a vcpu. */ +struct vcpu_hyp_state { + /* HYP configuration */ + u64 hcr_el2; + u32 mdcr_el2; + + /* Virtual SError ESR to restore when HCR_EL2.VSE is set */ + u64 vsesr_el2; + + /* Exception Information */ + struct kvm_vcpu_fault_info fault; + + /* Miscellaneous vcpu state flags */ + u64 flags; +}; + struct kvm_vcpu_arch { struct kvm_cpu_context ctxt; void *sve_state; unsigned int sve_max_vl; + struct vcpu_hyp_state hyp_state; + /* Stage 2 paging state used by the hardware on next switch */ struct kvm_s2_mmu *hw_mmu; - /* HYP configuration */ - u64 hcr_el2; - u32 mdcr_el2; - - /* Exception Information */ - struct kvm_vcpu_fault_info fault; - /* State of various workarounds, see kvm_asm.h for bit assignment */ u64 workaround_flags; - /* Miscellaneous vcpu state flags */ - u64 flags; - /* * We maintain more than a single set of debug registers to support * debugging the guest from the host and to maintain separate host and @@ -356,9 +364,6 @@ struct kvm_vcpu_arch { /* Detect first run of a vcpu */ bool has_run_once; - /* Virtual SError ESR to restore when HCR_EL2.VSE is set */ - u64 vsesr_el2; - /* Additional reset state */ struct vcpu_reset_state reset_state; @@ -373,7 +378,7 @@ struct kvm_vcpu_arch { } steal; }; -#define hyp_state(vcpu) ((vcpu)->arch) +#define hyp_state(vcpu) ((vcpu)->arch.hyp_state) /* Accessors for hyp_state parameters related to the hypervisor state. */ #define hyp_state_hcr_el2(hyps) (hyps)->hcr_el2 @@ -633,7 +638,7 @@ void kvm_arm_halt_guest(struct kvm *kvm); void kvm_arm_resume_guest(struct kvm *kvm); #ifndef __KVM_NVHE_HYPERVISOR__ -#define kvm_call_hyp_nvhe(f, ...) \ +#define kvm_call_hyp_nvhe(f, ...)
\ ({ \ struct arm_smccc_res res; \ \ diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index c2cc3a2813e6..1776efc3cc9d 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -107,7 +107,7 @@ int main(void) BLANK(); #ifdef CONFIG_KVM DEFINE(VCPU_CONTEXT, offsetof(struct kvm_vcpu, arch.ctxt)); - DEFINE(VCPU_FAULT_DISR, offsetof(struct kvm_vcpu, arch.fault.disr_el1)); + DEFINE(VCPU_FAULT_DISR, offsetof(struct kvm_vcpu, arch.hyp_state.fault.disr_el1)); DEFINE(VCPU_WORKAROUND_FLAGS, offsetof(struct kvm_vcpu, arch.workaround_flags)); DEFINE(CPU_USER_PT_REGS, offsetof(struct kvm_cpu_context, regs)); DEFINE(CPU_APIAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));

From patchwork Fri Sep 24 12:53:41 2021
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 12515429
Date: Fri, 24 Sep 2021 13:53:41 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-13-tabba@google.com>
Subject: [RFC PATCH v1 12/30] KVM: arm64: COCCI: add_hypstate.cocci use_hypstate.cocci: Reduce scope of functions to hyp_state
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com
List-ID: X-Mailing-List: kvm@vger.kernel.org

Many functions don't need access to the vcpu structure as a whole, but only to the hyp_state. Reduce their scope accordingly. This applies the semantic patches with the following commands:

FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]" ! -name "debug-sr*") arch/arm64/include/asm/kvm_hyp.h"
spatch --sp-file cocci_refactor/add_hypstate.cocci $FILES --in-place
spatch --sp-file cocci_refactor/use_hypstate.cocci $FILES --in-place

This patch adds variables that may be unused. These will be removed at the end of this patch series.
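The combined effect of the two semantic patches on a typical function looks roughly like this (condensed from the update_fp_enabled() hunk below):

	/* Before: the function reaches into the vcpu via vcpu_flags(). */
	static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
	{
		return !!(vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED);
	}

	/* After: add_hypstate.cocci introduces a local vcpu_hyps, and
	 * use_hypstate.cocci narrows the accesses to hyp_state_flags().
	 */
	static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
	{
		struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);

		return !!(hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED);
	}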
Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_hyp.h | 2 +- arch/arm64/kvm/hyp/aarch32.c | 2 + arch/arm64/kvm/hyp/exception.c | 19 +++++--- arch/arm64/kvm/hyp/include/hyp/adjust_pc.h | 2 + arch/arm64/kvm/hyp/include/hyp/switch.h | 54 +++++++++++++--------- arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 6 ++- arch/arm64/kvm/hyp/nvhe/switch.c | 21 +++++---- arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c | 1 + arch/arm64/kvm/hyp/vgic-v3-sr.c | 29 ++++++++++++ arch/arm64/kvm/hyp/vhe/switch.c | 25 +++++----- arch/arm64/kvm/hyp/vhe/sysreg-sr.c | 4 +- 11 files changed, 112 insertions(+), 53 deletions(-) diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 2e2b60a1b6c7..2737e05a16b2 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -94,7 +94,7 @@ void __sve_save_state(void *sve_pffr, u32 *fpsr); void __sve_restore_state(void *sve_pffr, u32 *fpsr); #ifndef __KVM_NVHE_HYPERVISOR__ -void activate_traps_vhe_load(struct kvm_vcpu *vcpu); +void activate_traps_vhe_load(struct vcpu_hyp_state *vcpu_hyps); void deactivate_traps_vhe_put(void); #endif diff --git a/arch/arm64/kvm/hyp/aarch32.c b/arch/arm64/kvm/hyp/aarch32.c index 27ebfff023ff..2d45e13d1b12 100644 --- a/arch/arm64/kvm/hyp/aarch32.c +++ b/arch/arm64/kvm/hyp/aarch32.c @@ -46,6 +46,7 @@ static const unsigned short cc_map[16] = { */ bool kvm_condition_valid32(const struct kvm_vcpu *vcpu) { + const struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); unsigned long cpsr; u32 cpsr_cond; @@ -126,6 +127,7 @@ static void kvm_adjust_itstate(struct kvm_cpu_context *vcpu_ctxt) */ void kvm_skip_instr32(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 pc = *ctxt_pc(vcpu_ctxt); bool is_thumb; diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c index 4514e345c26f..d4c2905b595d 100644 --- a/arch/arm64/kvm/hyp/exception.c +++ b/arch/arm64/kvm/hyp/exception.c @@ -59,26 +59,31 @@ static void __ctxt_write_spsr_und(struct kvm_cpu_context *vcpu_ctxt, u64 val) static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg) { + const struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); return __ctxt_read_sys_reg(&vcpu_ctxt(vcpu), reg); } static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); __ctxt_write_sys_reg(&vcpu_ctxt(vcpu), val, reg); } static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); __ctxt_write_spsr(&vcpu_ctxt(vcpu), val); } static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); __ctxt_write_spsr_abt(&vcpu_ctxt(vcpu), val); } static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); __ctxt_write_spsr_und(&vcpu_ctxt(vcpu), val); } @@ -326,9 +331,10 @@ static void enter_exception32(struct kvm_cpu_context *vcpu_ctxt, u32 mode, static void kvm_inject_exception(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (vcpu_el1_is_32bit(vcpu)) { - switch (vcpu_flags(vcpu) & KVM_ARM64_EXCEPT_MASK) { + switch (hyp_state_flags(vcpu_hyps) & KVM_ARM64_EXCEPT_MASK) { case KVM_ARM64_EXCEPT_AA32_UND: enter_exception32(vcpu_ctxt, PSR_AA32_MODE_UND, 4); break; @@ 
-343,7 +349,7 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu) break; } } else { - switch (vcpu_flags(vcpu) & KVM_ARM64_EXCEPT_MASK) { + switch (hyp_state_flags(vcpu_hyps) & KVM_ARM64_EXCEPT_MASK) { case (KVM_ARM64_EXCEPT_AA64_ELx_SYNC | KVM_ARM64_EXCEPT_AA64_EL1): enter_exception64(vcpu_ctxt, PSR_MODE_EL1h, @@ -366,13 +372,14 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu) */ void __kvm_adjust_pc(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - if (vcpu_flags(vcpu) & KVM_ARM64_PENDING_EXCEPTION) { + if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_PENDING_EXCEPTION) { kvm_inject_exception(vcpu); - vcpu_flags(vcpu) &= ~(KVM_ARM64_PENDING_EXCEPTION | + hyp_state_flags(vcpu_hyps) &= ~(KVM_ARM64_PENDING_EXCEPTION | KVM_ARM64_EXCEPT_MASK); - } else if (vcpu_flags(vcpu) & KVM_ARM64_INCREMENT_PC) { + } else if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_INCREMENT_PC) { kvm_skip_instr(vcpu); - vcpu_flags(vcpu) &= ~KVM_ARM64_INCREMENT_PC; + hyp_state_flags(vcpu_hyps) &= ~KVM_ARM64_INCREMENT_PC; } } diff --git a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h index 20dde9dbc11b..9bbe452a461a 100644 --- a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h +++ b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h @@ -15,6 +15,7 @@ static inline void kvm_skip_instr(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (ctxt_mode_is_32bit(vcpu_ctxt)) { kvm_skip_instr32(vcpu); @@ -33,6 +34,7 @@ static inline void kvm_skip_instr(struct kvm_vcpu *vcpu) */ static inline void __kvm_skip_instr(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); *ctxt_pc(vcpu_ctxt) = read_sysreg_el2(SYS_ELR); ctxt_gp_regs(vcpu_ctxt)->pstate = read_sysreg_el2(SYS_SPSR); diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 370a8fb827be..5ee8aac86fdc 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -36,6 +36,7 @@ extern struct exception_table_entry __stop___kvm_ex_table; /* Check whether the FP regs were dirtied while in the host-side run loop: */ static inline bool update_fp_enabled(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); /* * When the system doesn't support FP/SIMD, we cannot rely on * the _TIF_FOREIGN_FPSTATE flag. 
However, we always inject an @@ -45,15 +46,16 @@ static inline bool update_fp_enabled(struct kvm_vcpu *vcpu) */ if (!system_supports_fpsimd() || vcpu->arch.host_thread_info->flags & _TIF_FOREIGN_FPSTATE) - vcpu_flags(vcpu) &= ~(KVM_ARM64_FP_ENABLED | + hyp_state_flags(vcpu_hyps) &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST); - return !!(vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED); + return !!(hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED); } /* Save the 32-bit only FPSIMD system register state */ static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (!vcpu_el1_is_32bit(vcpu)) return; @@ -63,6 +65,7 @@ static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); /* * We are about to set CPTR_EL2.TFP to trap all floating point @@ -79,7 +82,7 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) } } -static inline void __activate_traps_common(struct kvm_vcpu *vcpu) +static inline void __activate_traps_common(struct vcpu_hyp_state *vcpu_hyps) { /* Trap on AArch32 cp15 c15 (impdef sysregs) accesses (EL1 or EL0) */ write_sysreg(1 << 15, hstr_el2); @@ -94,7 +97,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu) write_sysreg(0, pmselr_el0); write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0); } - write_sysreg(vcpu_mdcr_el2(vcpu), mdcr_el2); + write_sysreg(hyp_state_mdcr_el2(vcpu_hyps), mdcr_el2); } static inline void __deactivate_traps_common(void) @@ -104,9 +107,9 @@ static inline void __deactivate_traps_common(void) write_sysreg(0, pmuserenr_el0); } -static inline void ___activate_traps(struct kvm_vcpu *vcpu) +static inline void ___activate_traps(struct vcpu_hyp_state *vcpu_hyps) { - u64 hcr = vcpu_hcr_el2(vcpu); + u64 hcr = hyp_state_hcr_el2(vcpu_hyps); if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM)) hcr |= HCR_TVM; @@ -114,10 +117,10 @@ static inline void ___activate_traps(struct kvm_vcpu *vcpu) write_sysreg(hcr, hcr_el2); if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE)) - write_sysreg_s(vcpu_vsesr_el2(vcpu), SYS_VSESR_EL2); + write_sysreg_s(hyp_state_vsesr_el2(vcpu_hyps), SYS_VSESR_EL2); } -static inline void ___deactivate_traps(struct kvm_vcpu *vcpu) +static inline void ___deactivate_traps(struct vcpu_hyp_state *vcpu_hyps) { /* * If we pended a virtual abort, preserve it until it gets @@ -125,9 +128,9 @@ static inline void ___deactivate_traps(struct kvm_vcpu *vcpu) * the crucial bit is "On taking a vSError interrupt, * HCR_EL2.VSE is cleared to 0." 
*/ - if (vcpu_hcr_el2(vcpu) & HCR_VSE) { - vcpu_hcr_el2(vcpu) &= ~HCR_VSE; - vcpu_hcr_el2(vcpu) |= read_sysreg(hcr_el2) & HCR_VSE; + if (hyp_state_hcr_el2(vcpu_hyps) & HCR_VSE) { + hyp_state_hcr_el2(vcpu_hyps) &= ~HCR_VSE; + hyp_state_hcr_el2(vcpu_hyps) |= read_sysreg(hcr_el2) & HCR_VSE; } } @@ -191,18 +194,18 @@ static inline bool __get_fault_info(u64 esr, struct kvm_vcpu_fault_info *fault) return true; } -static inline bool __populate_fault_info(struct kvm_vcpu *vcpu) +static inline bool __populate_fault_info(struct vcpu_hyp_state *vcpu_hyps) { u8 ec; u64 esr; - esr = vcpu_fault(vcpu).esr_el2; + esr = hyp_state_fault(vcpu_hyps).esr_el2; ec = ESR_ELx_EC(esr); if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW) return true; - return __get_fault_info(esr, &vcpu_fault(vcpu)); + return __get_fault_info(esr, &hyp_state_fault(vcpu_hyps)); } static inline void __hyp_sve_save_host(struct kvm_vcpu *vcpu) @@ -217,6 +220,7 @@ static inline void __hyp_sve_save_host(struct kvm_vcpu *vcpu) static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2); __sve_restore_state(vcpu_sve_pffr(vcpu), @@ -227,6 +231,7 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu) /* Check for an FPSIMD/SVE trap and handle as appropriate */ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); bool sve_guest, sve_host; u8 esr_ec; @@ -236,8 +241,8 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) return false; if (system_supports_sve()) { - sve_guest = vcpu_has_sve(vcpu); - sve_host = vcpu_flags(vcpu) & KVM_ARM64_HOST_SVE_IN_USE; + sve_guest = hyp_state_has_sve(vcpu_hyps); + sve_host = hyp_state_flags(vcpu_hyps) & KVM_ARM64_HOST_SVE_IN_USE; } else { sve_guest = false; sve_host = false; @@ -268,13 +273,13 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) } isb(); - if (vcpu_flags(vcpu) & KVM_ARM64_FP_HOST) { + if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_HOST) { if (sve_host) __hyp_sve_save_host(vcpu); else __fpsimd_save_state(vcpu->arch.host_fpsimd_state); - vcpu_flags(vcpu) &= ~KVM_ARM64_FP_HOST; + hyp_state_flags(vcpu_hyps) &= ~KVM_ARM64_FP_HOST; } if (sve_guest) @@ -287,13 +292,14 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) write_sysreg(ctxt_sys_reg(vcpu_ctxt, FPEXC32_EL2), fpexc32_el2); - vcpu_flags(vcpu) |= KVM_ARM64_FP_ENABLED; + hyp_state_flags(vcpu_hyps) |= KVM_ARM64_FP_ENABLED; return true; } static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu)); int rt = kvm_vcpu_sys_get_rt(vcpu); @@ -303,7 +309,7 @@ static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu) * The normal sysreg handling code expects to see the traps, * let's not do anything here. 
*/ - if (vcpu_hcr_el2(vcpu) & HCR_TVM) + if (hyp_state_hcr_el2(vcpu_hyps) & HCR_TVM) return false; switch (sysreg) { @@ -388,11 +394,12 @@ DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt); static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *ctxt; u64 val; - if (!vcpu_has_ptrauth(vcpu) || + if (!hyp_state_has_ptrauth(vcpu_hyps) || !esr_is_ptrauth_trap(kvm_vcpu_get_esr(vcpu))) return false; @@ -419,9 +426,10 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu) */ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) - vcpu_fault(vcpu).esr_el2 = read_sysreg_el2(SYS_ESR); + hyp_state_fault(vcpu_hyps).esr_el2 = read_sysreg_el2(SYS_ESR); if (ARM_SERROR_PENDING(*exit_code)) { u8 esr_ec = kvm_vcpu_trap_get_class(vcpu); @@ -465,7 +473,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) if (__hyp_handle_ptrauth(vcpu)) goto guest; - if (!__populate_fault_info(vcpu)) + if (!__populate_fault_info(vcpu_hyps)) goto guest; if (static_branch_unlikely(&vgic_v2_cpuif_trap)) { diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h index d49985e825cd..7bc8b34b65b2 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -158,6 +158,7 @@ static inline void __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctx static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (!vcpu_el1_is_32bit(vcpu)) return; @@ -170,12 +171,13 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu) ctxt_sys_reg(vcpu_ctxt, DACR32_EL2) = read_sysreg(dacr32_el2); ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2) = read_sysreg(ifsr32_el2); - if (has_vhe() || vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY) + if (has_vhe() || hyp_state_flags(vcpu_hyps) & KVM_ARM64_DEBUG_DIRTY) ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2); } static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (!vcpu_el1_is_32bit(vcpu)) return; @@ -188,7 +190,7 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu) write_sysreg(ctxt_sys_reg(vcpu_ctxt, DACR32_EL2), dacr32_el2); write_sysreg(ctxt_sys_reg(vcpu_ctxt, IFSR32_EL2), ifsr32_el2); - if (has_vhe() || vcpu_flags(vcpu) & KVM_ARM64_DEBUG_DIRTY) + if (has_vhe() || hyp_state_flags(vcpu_hyps) & KVM_ARM64_DEBUG_DIRTY) write_sysreg(ctxt_sys_reg(vcpu_ctxt, DBGVCR32_EL2), dbgvcr32_el2); } diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index ac7529305717..d9326085387b 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -36,11 +36,12 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector); static void __activate_traps(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val; - ___activate_traps(vcpu); - __activate_traps_common(vcpu); + ___activate_traps(vcpu_hyps); + __activate_traps_common(vcpu_hyps); val = CPTR_EL2_DEFAULT; val |= 
CPTR_EL2_TTA | CPTR_EL2_TAM; @@ -67,13 +68,12 @@ static void __activate_traps(struct kvm_vcpu *vcpu) } } -static void __deactivate_traps(struct kvm_vcpu *vcpu) +static void __deactivate_traps(struct vcpu_hyp_state *vcpu_hyps) { - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); extern char __kvm_hyp_host_vector[]; u64 mdcr_el2, cptr; - ___deactivate_traps(vcpu); + ___deactivate_traps(vcpu_hyps); mdcr_el2 = read_sysreg(mdcr_el2); @@ -104,7 +104,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu) write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2); cptr = CPTR_EL2_DEFAULT; - if (vcpu_has_sve(vcpu) && (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED)) + if (hyp_state_has_sve(vcpu_hyps) && (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED)) cptr |= CPTR_EL2_TZ; write_sysreg(cptr, cptr_el2); @@ -170,6 +170,7 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) /* Switch to the guest for legacy non-VHE systems */ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; @@ -236,12 +237,12 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) __timer_disable_traps(); __hyp_vgic_save_state(vcpu); - __deactivate_traps(vcpu); + __deactivate_traps(vcpu_hyps); __load_host_stage2(); __sysreg_restore_state_nvhe(host_ctxt); - if (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED) + if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED) __fpsimd_save_fpexc32(vcpu); __debug_switch_to_host(vcpu); @@ -270,15 +271,17 @@ void __noreturn hyp_panic(void) u64 par = read_sysreg_par(); struct kvm_cpu_context *host_ctxt; struct kvm_vcpu *vcpu; + struct vcpu_hyp_state *vcpu_hyps; struct kvm_cpu_context *vcpu_ctxt; host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; vcpu = host_ctxt->__hyp_running_vcpu; + vcpu_hyps = &hyp_state(vcpu); vcpu_ctxt = &vcpu_ctxt(vcpu); if (vcpu) { __timer_disable_traps(); - __deactivate_traps(vcpu); + __deactivate_traps(vcpu_hyps); __load_host_stage2(); __sysreg_restore_state_nvhe(host_ctxt); } diff --git a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c index 8dbc39026cc5..84304d6d455a 100644 --- a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c +++ b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c @@ -36,6 +36,7 @@ static bool __is_be(struct kvm_cpu_context *vcpu_ctxt) */ int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm *kvm = kern_hyp_va(vcpu->kvm); struct vgic_dist *vgic = &kvm->arch.vgic; diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c index bdb03b8e50ab..725b2976e7c2 100644 --- a/arch/arm64/kvm/hyp/vgic-v3-sr.c +++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c @@ -473,6 +473,7 @@ static int __vgic_v3_bpr_min(void) static int __vgic_v3_get_group(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 esr = kvm_vcpu_get_esr(vcpu); u8 crm = (esr & ESR_ELx_SYS64_ISS_CRM_MASK) >> ESR_ELx_SYS64_ISS_CRM_SHIFT; @@ -674,6 +675,7 @@ static int __vgic_v3_clear_highest_active_priority(void) static void __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 lr_val; u8 lr_prio, pmr; @@ -733,6 +735,7 @@ static void __vgic_v3_bump_eoicount(void) static 
void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 vid = ctxt_get_reg(vcpu_ctxt, rt); u64 lr_val; @@ -757,6 +760,7 @@ static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 vid = ctxt_get_reg(vcpu_ctxt, rt); u64 lr_val; @@ -795,18 +799,21 @@ static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_read_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG0_MASK)); } static void __vgic_v3_read_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG1_MASK)); } static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val = ctxt_get_reg(vcpu_ctxt, rt); @@ -820,6 +827,7 @@ static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val = ctxt_get_reg(vcpu_ctxt, rt); @@ -833,18 +841,21 @@ static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_read_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr0(vmcr)); } static void __vgic_v3_read_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr1(vmcr)); } static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val = ctxt_get_reg(vcpu_ctxt, rt); u8 bpr_min = __vgic_v3_bpr_min() - 1; @@ -863,6 +874,7 @@ static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val = ctxt_get_reg(vcpu_ctxt, rt); u8 bpr_min = __vgic_v3_bpr_min(); @@ -884,6 +896,7 @@ static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val; @@ -897,6 +910,7 @@ static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n) static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val = ctxt_get_reg(vcpu_ctxt, rt); @@ -909,6 +923,7 @@ static void 
__vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n) static void __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_read_apxrn(vcpu, rt, 0); } @@ -916,48 +931,56 @@ static void __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu, static void __vgic_v3_read_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_read_apxrn(vcpu, rt, 1); } static void __vgic_v3_read_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_read_apxrn(vcpu, rt, 2); } static void __vgic_v3_read_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_read_apxrn(vcpu, rt, 3); } static void __vgic_v3_write_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_write_apxrn(vcpu, rt, 0); } static void __vgic_v3_write_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_write_apxrn(vcpu, rt, 1); } static void __vgic_v3_write_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_write_apxrn(vcpu, rt, 2); } static void __vgic_v3_write_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); __vgic_v3_write_apxrn(vcpu, rt, 3); } static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 lr_val; int lr, lr_grp, grp; @@ -978,6 +1001,7 @@ static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); vmcr &= ICH_VMCR_PMR_MASK; vmcr >>= ICH_VMCR_PMR_SHIFT; @@ -986,6 +1010,7 @@ static void __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val = ctxt_get_reg(vcpu_ctxt, rt); @@ -999,6 +1024,7 @@ static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val = __vgic_v3_get_highest_active_priority(); ctxt_set_reg(vcpu_ctxt, rt, val); @@ -1006,6 +1032,7 @@ static void __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 vtr, val; @@ -1028,6 +1055,7 @@ static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) static void 
__vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val = ctxt_get_reg(vcpu_ctxt, rt); @@ -1046,6 +1074,7 @@ static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); int rt; u32 esr; diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 0113d442bc95..c9da0d1c7e72 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -33,10 +33,11 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector); static void __activate_traps(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val; - ___activate_traps(vcpu); + ___activate_traps(vcpu_hyps); val = read_sysreg(cpacr_el1); val |= CPACR_EL1_TTA; @@ -54,7 +55,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu) val |= CPTR_EL2_TAM; if (update_fp_enabled(vcpu)) { - if (vcpu_has_sve(vcpu)) + if (hyp_state_has_sve(vcpu_hyps)) val |= CPACR_EL1_ZEN; } else { val &= ~CPACR_EL1_FPEN; @@ -67,12 +68,11 @@ static void __activate_traps(struct kvm_vcpu *vcpu) } NOKPROBE_SYMBOL(__activate_traps); -static void __deactivate_traps(struct kvm_vcpu *vcpu) +static void __deactivate_traps(struct vcpu_hyp_state *vcpu_hyps) { - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); extern char vectors[]; /* kernel exception vectors */ - ___deactivate_traps(vcpu); + ___deactivate_traps(vcpu_hyps); write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2); @@ -88,10 +88,9 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu) } NOKPROBE_SYMBOL(__deactivate_traps); -void activate_traps_vhe_load(struct kvm_vcpu *vcpu) +void activate_traps_vhe_load(struct vcpu_hyp_state *vcpu_hyps) { - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - __activate_traps_common(vcpu); + __activate_traps_common(vcpu_hyps); } void deactivate_traps_vhe_put(void) @@ -110,6 +109,7 @@ void deactivate_traps_vhe_put(void) /* Switch to the guest for VHE systems running in EL2 */ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; @@ -149,11 +149,11 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) sysreg_save_guest_state_vhe(guest_ctxt); - __deactivate_traps(vcpu); + __deactivate_traps(vcpu_hyps); sysreg_restore_host_state_vhe(host_ctxt); - if (vcpu_flags(vcpu) & KVM_ARM64_FP_ENABLED) + if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED) __fpsimd_save_fpexc32(vcpu); __debug_switch_to_host(vcpu); @@ -164,6 +164,7 @@ NOKPROBE_SYMBOL(__kvm_vcpu_run_vhe); int __kvm_vcpu_run(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); int ret; @@ -202,13 +203,15 @@ static void __hyp_call_panic(u64 spsr, u64 elr, u64 par) { struct kvm_cpu_context *host_ctxt; struct kvm_vcpu *vcpu; + struct vcpu_hyp_state *vcpu_hyps; struct kvm_cpu_context *vcpu_ctxt; host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; vcpu = host_ctxt->__hyp_running_vcpu; + vcpu_hyps = &hyp_state(vcpu); vcpu_ctxt = &vcpu_ctxt(vcpu); - __deactivate_traps(vcpu); + __deactivate_traps(vcpu_hyps); sysreg_restore_host_state_vhe(host_ctxt); panic("HYP 
panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n", diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c index 37f56b4743d0..1571c144e9b0 100644 --- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c @@ -63,6 +63,7 @@ NOKPROBE_SYMBOL(sysreg_restore_guest_state_vhe); */ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt; struct kvm_cpu_context *host_ctxt; @@ -82,7 +83,7 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu) vcpu->arch.sysregs_loaded_on_cpu = true; - activate_traps_vhe_load(vcpu); + activate_traps_vhe_load(vcpu_hyps); } /** @@ -98,6 +99,7 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu) */ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu) { + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt; struct kvm_cpu_context *host_ctxt;
From patchwork Fri Sep 24 12:53:42 2021
Date: Fri, 24 Sep 2021 13:53:42 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-14-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 13/30] KVM: arm64: change function parameters to use kvm_cpu_ctxt and hyp_state
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

__kvm_skip_instr, kvm_condition_valid, and __kvm_adjust_pc are passed the vcpu when all they need is the context as well as the hypervisor state. Refactor them to use these instead. These functions are called directly or indirectly in future patches from contexts that don't have access to the whole vcpu.
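To make the shape of the change concrete, here is the kvm_condition_valid split as it appears in the kvm_emulate.h hunk below (reproduced from the diff, with explanatory comments added by the editor): the core check takes only the two structs it actually reads, and a thin wrapper preserves the vcpu-based interface for existing callers.

	static __always_inline bool
	__kvm_condition_valid(const struct kvm_cpu_context *vcpu_ctxt,
			      const struct vcpu_hyp_state *vcpu_hyps)
	{
		/* Only AArch32 guest instructions can be conditional. */
		if (ctxt_mode_is_32bit(vcpu_ctxt))
			return kvm_condition_valid32(vcpu_ctxt, vcpu_hyps);

		return true;
	}

	/* Thin wrapper: callers that still hold a full vcpu are unaffected. */
	static __always_inline bool kvm_condition_valid(const struct kvm_vcpu *vcpu)
	{
		return __kvm_condition_valid(&vcpu->arch.ctxt, &hyp_state(vcpu));
	}

The same pattern, a struct-scoped worker plus a vcpu wrapper, is applied to __kvm_adjust_pc and the skip-instruction helpers in the hunks that follow.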
Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_emulate.h | 15 ++++++++++----- arch/arm64/kvm/hyp/aarch32.c | 14 +++++--------- arch/arm64/kvm/hyp/exception.c | 19 ++++++++++--------- arch/arm64/kvm/hyp/include/hyp/adjust_pc.h | 14 ++++++-------- arch/arm64/kvm/hyp/include/hyp/switch.h | 2 +- arch/arm64/kvm/hyp/nvhe/switch.c | 2 +- arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c | 6 +++--- arch/arm64/kvm/hyp/vgic-v3-sr.c | 4 ++-- arch/arm64/kvm/hyp/vhe/switch.c | 2 +- 9 files changed, 39 insertions(+), 39 deletions(-) diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index e095afeecd10..28fc4781249e 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -33,8 +33,8 @@ enum exception_type { except_type_serror = 0x180, }; -bool kvm_condition_valid32(const struct kvm_vcpu *vcpu); -void kvm_skip_instr32(struct kvm_vcpu *vcpu); +bool kvm_condition_valid32(const struct kvm_cpu_context *vcpu_ctxt, const struct vcpu_hyp_state *vcpu_hyps); +void kvm_skip_instr32(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps); void kvm_inject_undefined(struct kvm_vcpu *vcpu); void kvm_inject_vabt(struct kvm_vcpu *vcpu); @@ -162,14 +162,19 @@ static __always_inline bool vcpu_mode_is_32bit(const struct kvm_vcpu *vcpu) return ctxt_mode_is_32bit(&vcpu_ctxt(vcpu)); } -static __always_inline bool kvm_condition_valid(const struct kvm_vcpu *vcpu) +static __always_inline bool __kvm_condition_valid(const struct kvm_cpu_context *vcpu_ctxt, const struct vcpu_hyp_state *vcpu_hyps) { - if (vcpu_mode_is_32bit(vcpu)) - return kvm_condition_valid32(vcpu); + if (ctxt_mode_is_32bit(vcpu_ctxt)) + return kvm_condition_valid32(vcpu_ctxt, vcpu_hyps); return true; } +static __always_inline bool kvm_condition_valid(const struct kvm_vcpu *vcpu) +{ + return __kvm_condition_valid(&vcpu->arch.ctxt, &hyp_state(vcpu)); +} + static inline void ctxt_set_thumb(struct kvm_cpu_context *ctxt) { *ctxt_cpsr(ctxt) |= PSR_AA32_T_BIT; diff --git a/arch/arm64/kvm/hyp/aarch32.c b/arch/arm64/kvm/hyp/aarch32.c index 2d45e13d1b12..2feb2f8d9907 100644 --- a/arch/arm64/kvm/hyp/aarch32.c +++ b/arch/arm64/kvm/hyp/aarch32.c @@ -44,20 +44,18 @@ static const unsigned short cc_map[16] = { /* * Check if a trapped instruction should have been executed or not. */ -bool kvm_condition_valid32(const struct kvm_vcpu *vcpu) +bool kvm_condition_valid32(const struct kvm_cpu_context *vcpu_ctxt, const struct vcpu_hyp_state *vcpu_hyps) { - const struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); unsigned long cpsr; u32 cpsr_cond; int cond; /* Top two bits non-zero? Unconditional. */ - if (kvm_vcpu_get_esr(vcpu) >> 30) + if (kvm_hyp_state_get_esr(vcpu_hyps) >> 30) return true; /* Is condition field valid? 
*/ - cond = kvm_vcpu_get_condition(vcpu); + cond = kvm_hyp_state_get_condition(vcpu_hyps); if (cond == 0xE) return true; @@ -125,15 +123,13 @@ static void kvm_adjust_itstate(struct kvm_cpu_context *vcpu_ctxt) * kvm_skip_instr - skip a trapped instruction and proceed to the next * @vcpu: The vcpu pointer */ -void kvm_skip_instr32(struct kvm_vcpu *vcpu) +void kvm_skip_instr32(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 pc = *ctxt_pc(vcpu_ctxt); bool is_thumb; is_thumb = !!(*ctxt_cpsr(vcpu_ctxt) & PSR_AA32_T_BIT); - if (is_thumb && !kvm_vcpu_trap_il_is32bit(vcpu)) + if (is_thumb && !kvm_hyp_state_trap_il_is32bit(vcpu_hyps)) pc += 2; else pc += 4; diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c index d4c2905b595d..a08806efe031 100644 --- a/arch/arm64/kvm/hyp/exception.c +++ b/arch/arm64/kvm/hyp/exception.c @@ -329,11 +329,9 @@ static void enter_exception32(struct kvm_cpu_context *vcpu_ctxt, u32 mode, *ctxt_pc(vcpu_ctxt) = vect_offset; } -static void kvm_inject_exception(struct kvm_vcpu *vcpu) +static void kvm_inject_exception(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - if (vcpu_el1_is_32bit(vcpu)) { + if (hyp_state_el1_is_32bit(vcpu_hyps)) { switch (hyp_state_flags(vcpu_hyps) & KVM_ARM64_EXCEPT_MASK) { case KVM_ARM64_EXCEPT_AA32_UND: enter_exception32(vcpu_ctxt, PSR_AA32_MODE_UND, 4); @@ -370,16 +368,19 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu) * Adjust the guest PC (and potentially exception state) depending on * flags provided by the emulation code. */ -void __kvm_adjust_pc(struct kvm_vcpu *vcpu) +void kvm_adjust_pc(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_PENDING_EXCEPTION) { - kvm_inject_exception(vcpu); + kvm_inject_exception(vcpu_ctxt, vcpu_hyps); hyp_state_flags(vcpu_hyps) &= ~(KVM_ARM64_PENDING_EXCEPTION | KVM_ARM64_EXCEPT_MASK); } else if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_INCREMENT_PC) { - kvm_skip_instr(vcpu); + kvm_skip_instr(vcpu_ctxt, vcpu_hyps); hyp_state_flags(vcpu_hyps) &= ~KVM_ARM64_INCREMENT_PC; } } + +void __kvm_adjust_pc(struct kvm_vcpu *vcpu) +{ + kvm_adjust_pc(&vcpu_ctxt(vcpu), &hyp_state(vcpu)); +} diff --git a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h index 9bbe452a461a..4e0cfbe635e5 100644 --- a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h +++ b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h @@ -13,12 +13,10 @@ #include #include -static inline void kvm_skip_instr(struct kvm_vcpu *vcpu) +static inline void kvm_skip_instr(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); if (ctxt_mode_is_32bit(vcpu_ctxt)) { - kvm_skip_instr32(vcpu); + kvm_skip_instr32(vcpu_ctxt, vcpu_hyps); } else { *ctxt_pc(vcpu_ctxt) += 4; *ctxt_cpsr(vcpu_ctxt) &= ~PSR_BTYPE_MASK; @@ -32,14 +30,12 @@ static inline void kvm_skip_instr(struct kvm_vcpu *vcpu) * Skip an instruction which has been emulated at hyp while most guest sysregs * are live. 
*/ -static inline void __kvm_skip_instr(struct kvm_vcpu *vcpu) +static inline void __kvm_skip_instr(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); *ctxt_pc(vcpu_ctxt) = read_sysreg_el2(SYS_ELR); ctxt_gp_regs(vcpu_ctxt)->pstate = read_sysreg_el2(SYS_SPSR); - kvm_skip_instr(vcpu); + kvm_skip_instr(vcpu_ctxt, vcpu_hyps); write_sysreg_el2(ctxt_gp_regs(vcpu_ctxt)->pstate, SYS_SPSR); write_sysreg_el2(*ctxt_pc(vcpu_ctxt), SYS_ELR); @@ -54,4 +50,6 @@ static inline void kvm_skip_host_instr(void) write_sysreg_el2(read_sysreg_el2(SYS_ELR) + 4, SYS_ELR); } +void kvm_adjust_pc(struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps); + #endif diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 5ee8aac86fdc..075719c07009 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -350,7 +350,7 @@ static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu) return false; } - __kvm_skip_instr(vcpu); + __kvm_skip_instr(vcpu_ctxt, vcpu_hyps); return true; } diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index d9326085387b..eadbf2ccaf68 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -204,7 +204,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) */ __debug_save_host_buffers_nvhe(vcpu); - __kvm_adjust_pc(vcpu); + kvm_adjust_pc(vcpu_ctxt, vcpu_hyps); /* * We must restore the 32-bit state before the sysregs, thanks diff --git a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c index 84304d6d455a..acd0d21394e3 100644 --- a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c +++ b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c @@ -55,13 +55,13 @@ int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu) /* Reject anything but a 32bit access */ if (kvm_vcpu_dabt_get_as(vcpu) != sizeof(u32)) { - __kvm_skip_instr(vcpu); + __kvm_skip_instr(vcpu_ctxt, vcpu_hyps); return -1; } /* Not aligned? 
Don't bother */ if (fault_ipa & 3) { - __kvm_skip_instr(vcpu); + __kvm_skip_instr(vcpu_ctxt, vcpu_hyps); return -1; } @@ -85,7 +85,7 @@ int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu) ctxt_set_reg(vcpu_ctxt, rd, data); } - __kvm_skip_instr(vcpu); + __kvm_skip_instr(vcpu_ctxt, vcpu_hyps); return 1; } diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c index 725b2976e7c2..d025a5830dcc 100644 --- a/arch/arm64/kvm/hyp/vgic-v3-sr.c +++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c @@ -1086,7 +1086,7 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) esr = kvm_vcpu_get_esr(vcpu); if (ctxt_mode_is_32bit(vcpu_ctxt)) { if (!kvm_condition_valid(vcpu)) { - __kvm_skip_instr(vcpu); + __kvm_skip_instr(vcpu_ctxt, vcpu_hyps); return 1; } @@ -1198,7 +1198,7 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) rt = kvm_vcpu_sys_get_rt(vcpu); fn(vcpu, vmcr, rt); - __kvm_skip_instr(vcpu); + __kvm_skip_instr(vcpu_ctxt, vcpu_hyps); return 1; } diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index c9da0d1c7e72..395274532c20 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -135,7 +135,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) __load_guest_stage2(vcpu->arch.hw_mmu); __activate_traps(vcpu); - __kvm_adjust_pc(vcpu); + kvm_adjust_pc(vcpu_ctxt, vcpu_hyps); sysreg_restore_guest_state_vhe(guest_ctxt); __debug_switch_to_guest(vcpu);
From patchwork Fri Sep 24 12:53:43 2021
Date: Fri, 24 Sep 2021 13:53:43 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-15-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 14/30] KVM: arm64: reduce scope of vgic v2
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

vgic v2 interface functions are passed the vcpu, when the state they actually need is the vgic distributor, along with the kvm_cpu_context and the recently created vcpu_hyp_state. Reduce the scope of these interface functions to those structs. Pass the vgic distributor to fixup_guest_exit so that it no longer depends on struct kvm for the vgic state. NOTE: this change to fixup_guest_exit is temporary, and will be tidied up in a subsequent patch in this series.
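For illustration, a condensed sketch of the resulting call path, assembled from the kvm_hyp.h and nvhe/switch.c hunks below (the surrounding run-loop code is elided): the run loop resolves the distributor once, and the GICV proxy no longer has to reach it through vcpu->kvm.

	/* New, narrower interface (kvm_hyp.h): */
	int __vgic_v2_perform_cpuif_access(struct vgic_dist *vgic,
					   struct kvm_cpu_context *ctxt,
					   struct vcpu_hyp_state *hyps);

	/* Caller side in the nVHE __kvm_vcpu_run(): translate the kvm
	 * pointer to its hyp alias, then thread the distributor down
	 * through fixup_guest_exit(). */
	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
	struct vgic_dist *vgic = &kvm->arch.vgic;
	...
	} while (fixup_guest_exit(vcpu, vgic, &exit_code));

Threading the pointer explicitly keeps the proxy usable from contexts that cannot dereference the whole struct kvm, which is the point of the series.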
Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_hyp.h | 2 +- arch/arm64/kvm/hyp/include/hyp/switch.h | 4 ++-- arch/arm64/kvm/hyp/nvhe/switch.c | 4 +++- arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c | 16 ++++++---------- arch/arm64/kvm/hyp/vhe/switch.c | 3 ++- 5 files changed, 14 insertions(+), 15 deletions(-) diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 2737e05a16b2..d9a8872a7efb 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -55,7 +55,7 @@ DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params); */ #define __kvm_swab32(x) ___constant_swab32(x) -int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu); +int __vgic_v2_perform_cpuif_access(struct vgic_dist *vgic, struct kvm_cpu_context *ctxt, struct vcpu_hyp_state *hyps); void __vgic_v3_save_state(struct vgic_v3_cpu_if *cpu_if); void __vgic_v3_restore_state(struct vgic_v3_cpu_if *cpu_if); diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 075719c07009..30fcfe84f609 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -424,7 +424,7 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu) * the guest, false when we should restore the host state and return to the * main run loop. */ -static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) +static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgic, u64 *exit_code) { struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); @@ -486,7 +486,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) !kvm_vcpu_abt_iss1tw(vcpu); if (valid) { - int ret = __vgic_v2_perform_cpuif_access(vcpu); + int ret = __vgic_v2_perform_cpuif_access(vgic, vcpu_ctxt, vcpu_hyps); if (ret == 1) goto guest; diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index eadbf2ccaf68..164b0f899f7b 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -172,6 +172,8 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) { struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + struct kvm *kvm = kern_hyp_va(vcpu->kvm); + struct vgic_dist *vgic = &kvm->arch.vgic; struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; bool pmu_switch_needed; @@ -230,7 +232,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) exit_code = __guest_enter(vcpu); /* And we're baaack! 
*/ - } while (fixup_guest_exit(vcpu, &exit_code)); + } while (fixup_guest_exit(vcpu, vgic, &exit_code)); __sysreg_save_state_nvhe(guest_ctxt); __sysreg32_save_state(vcpu); diff --git a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c index acd0d21394e3..787f973af43a 100644 --- a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c +++ b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c @@ -34,19 +34,15 @@ static bool __is_be(struct kvm_cpu_context *vcpu_ctxt) * 0: Not a GICV access * -1: Illegal GICV access successfully performed */ -int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu) +int __vgic_v2_perform_cpuif_access(struct vgic_dist *vgic, struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - struct kvm *kvm = kern_hyp_va(vcpu->kvm); - struct vgic_dist *vgic = &kvm->arch.vgic; phys_addr_t fault_ipa; void __iomem *addr; int rd; /* Build the full address */ - fault_ipa = kvm_vcpu_get_fault_ipa(vcpu); - fault_ipa |= kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0); + fault_ipa = kvm_hyp_state_get_fault_ipa(vcpu_hyps); + fault_ipa |= kvm_hyp_state_get_hfar(vcpu_hyps) & GENMASK(11, 0); /* If not for GICV, move on */ if (fault_ipa < vgic->vgic_cpu_base || @@ -54,7 +50,7 @@ int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu) return 0; /* Reject anything but a 32bit access */ - if (kvm_vcpu_dabt_get_as(vcpu) != sizeof(u32)) { + if (kvm_hyp_state_dabt_get_as(vcpu_hyps) != sizeof(u32)) { __kvm_skip_instr(vcpu_ctxt, vcpu_hyps); return -1; } @@ -65,11 +61,11 @@ int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu) return -1; } - rd = kvm_vcpu_dabt_get_rd(vcpu); + rd = kvm_hyp_state_dabt_get_rd(vcpu_hyps); addr = kvm_vgic_global_state.vcpu_hyp_va; addr += fault_ipa - vgic->vgic_cpu_base; - if (kvm_vcpu_dabt_iswrite(vcpu)) { + if (kvm_hyp_state_dabt_iswrite(vcpu_hyps)) { u32 data = ctxt_get_reg(vcpu_ctxt, rd); if (__is_be(vcpu_ctxt)) { /* guest pre-swabbed data, undo this for writel() */ diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 395274532c20..f315058a50ca 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -111,6 +111,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) { struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + struct vgic_dist *vgic = &vcpu->kvm->arch.vgic; struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; u64 exit_code; @@ -145,7 +146,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) exit_code = __guest_enter(vcpu); /* And we're baaack! 
*/ - } while (fixup_guest_exit(vcpu, &exit_code)); + } while (fixup_guest_exit(vcpu, vgic, &exit_code)); sysreg_save_guest_state_vhe(guest_ctxt);
From patchwork Fri Sep 24 12:53:44 2021
Date: Fri, 24 Sep 2021 13:53:44 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-16-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 15/30] KVM: arm64: COCCI: vgic3_cpu.cocci: reduce scope of vgic v3
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

vgic v3 interface functions are passed the vcpu, when the state they actually need is the vgic interface, along with the kvm_cpu_context and the recently created vcpu_hyp_state. Reduce the scope of these interface functions to those structs. This applies the semantic patch with the following command: spatch --sp-file cocci_refactor/vgic3_cpu.cocci arch/arm64/kvm/hyp/vgic-v3-sr.c --in-place Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/vgic-v3-sr.c | 247 ++++++++++++++++++-------------- 1 file changed, 137 insertions(+), 110 deletions(-) diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c index d025a5830dcc..3e1951b04fce 100644 --- a/arch/arm64/kvm/hyp/vgic-v3-sr.c +++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c @@ -471,11 +471,10 @@ static int __vgic_v3_bpr_min(void) return 8 - vtr_to_nr_pre_bits(read_gicreg(ICH_VTR_EL2)); } -static int __vgic_v3_get_group(struct kvm_vcpu *vcpu) +static int __vgic_v3_get_group(struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - u32 esr = kvm_vcpu_get_esr(vcpu); + u32 esr = kvm_hyp_state_get_esr(vcpu_hyps); u8 crm = (esr & ESR_ELx_SYS64_ISS_CRM_MASK) >> ESR_ELx_SYS64_ISS_CRM_SHIFT; return crm != 8; @@ -483,10 +482,11 @@ static int __vgic_v3_get_group(struct kvm_vcpu *vcpu) #define GICv3_IDLE_PRIORITY 0xff -static int __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, u32 vmcr, +static int __vgic_v3_highest_priority_lr(struct vgic_v3_cpu_if *cpu_if, + u32 vmcr, u64 *lr_val) { - unsigned int used_lrs = vcpu->arch.vgic_cpu.vgic_v3.used_lrs; + unsigned int used_lrs = cpu_if->used_lrs; u8 priority = GICv3_IDLE_PRIORITY; int i, lr = -1; @@ -522,10 +522,10 @@ static int __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, u32 vmcr, return lr; } -static int __vgic_v3_find_active_lr(struct kvm_vcpu *vcpu, int intid, +static int __vgic_v3_find_active_lr(struct vgic_v3_cpu_if *cpu_if, int intid, u64 *lr_val) { - unsigned int used_lrs = vcpu->arch.vgic_cpu.vgic_v3.used_lrs; + unsigned int used_lrs = cpu_if->used_lrs; int i; for (i = 0; i < used_lrs; i++) { @@ -673,17 +673,18 @@ static int __vgic_v3_clear_highest_active_priority(void) return GICv3_IDLE_PRIORITY; } -static void __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_iar(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 lr_val; u8 lr_prio, pmr; int lr, grp; - grp = __vgic_v3_get_group(vcpu); + grp = __vgic_v3_get_group(vcpu_ctxt, vcpu_hyps); - lr = __vgic_v3_highest_priority_lr(vcpu, vmcr, &lr_val); + lr = __vgic_v3_highest_priority_lr(cpu_if, vmcr, &lr_val); if (lr < 0) goto spurious; @@ -733,10 +734,11 @@ static void 
__vgic_v3_bump_eoicount(void) write_gicreg(hcr, ICH_HCR_EL2); } -static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_dir(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 vid = ctxt_get_reg(vcpu_ctxt, rt); u64 lr_val; int lr; @@ -749,7 +751,7 @@ static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) if (vid >= VGIC_MIN_LPI) return; - lr = __vgic_v3_find_active_lr(vcpu, vid, &lr_val); + lr = __vgic_v3_find_active_lr(cpu_if, vid, &lr_val); if (lr == -1) { __vgic_v3_bump_eoicount(); return; @@ -758,16 +760,17 @@ static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) __vgic_v3_clear_active_lr(lr, lr_val); } -static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_eoir(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 vid = ctxt_get_reg(vcpu_ctxt, rt); u64 lr_val; u8 lr_prio, act_prio; int lr, grp; - grp = __vgic_v3_get_group(vcpu); + grp = __vgic_v3_get_group(vcpu_ctxt, vcpu_hyps); /* Drop priority in any case */ act_prio = __vgic_v3_clear_highest_active_priority(); @@ -780,7 +783,7 @@ static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) if (vmcr & ICH_VMCR_EOIM_MASK) return; - lr = __vgic_v3_find_active_lr(vcpu, vid, &lr_val); + lr = __vgic_v3_find_active_lr(cpu_if, vid, &lr_val); if (lr == -1) { __vgic_v3_bump_eoicount(); return; @@ -797,24 +800,27 @@ static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) __vgic_v3_clear_active_lr(lr, lr_val); } -static void __vgic_v3_read_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_igrpen0(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG0_MASK)); } -static void __vgic_v3_read_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_igrpen1(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG1_MASK)); } -static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_igrpen0(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, + u32 vmcr, int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val = ctxt_get_reg(vcpu_ctxt, rt); if (val & 1) @@ -825,10 +831,11 @@ static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) __vgic_v3_write_vmcr(vmcr); } -static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_igrpen1(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, + u32 vmcr, int rt) { - struct vcpu_hyp_state *vcpu_hyps 
= &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val = ctxt_get_reg(vcpu_ctxt, rt); if (val & 1) @@ -839,24 +846,27 @@ static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) __vgic_v3_write_vmcr(vmcr); } -static void __vgic_v3_read_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_bpr0(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr0(vmcr)); } -static void __vgic_v3_read_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_bpr1(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr1(vmcr)); } -static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_bpr0(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val = ctxt_get_reg(vcpu_ctxt, rt); u8 bpr_min = __vgic_v3_bpr_min() - 1; @@ -872,10 +882,11 @@ static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) __vgic_v3_write_vmcr(vmcr); } -static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_bpr1(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val = ctxt_get_reg(vcpu_ctxt, rt); u8 bpr_min = __vgic_v3_bpr_min(); @@ -894,13 +905,14 @@ static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) __vgic_v3_write_vmcr(vmcr); } -static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n) +static void __vgic_v3_read_apxrn(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, int rt, + int n) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val; - if (!__vgic_v3_get_group(vcpu)) + if (!__vgic_v3_get_group(vcpu_ctxt, vcpu_hyps)) val = __vgic_v3_read_ap0rn(n); else val = __vgic_v3_read_ap1rn(n); @@ -908,86 +920,94 @@ static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n) ctxt_set_reg(vcpu_ctxt, rt, val); } -static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n) +static void __vgic_v3_write_apxrn(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, int rt, + int n) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val = ctxt_get_reg(vcpu_ctxt, rt); - if (!__vgic_v3_get_group(vcpu)) + if (!__vgic_v3_get_group(vcpu_ctxt, vcpu_hyps)) __vgic_v3_write_ap0rn(val, n); else __vgic_v3_write_ap1rn(val, n); } -static void __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu, +static void __vgic_v3_read_apxr0(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, int rt) { - struct vcpu_hyp_state 
*vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - __vgic_v3_read_apxrn(vcpu, rt, 0); + __vgic_v3_read_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 0); } -static void __vgic_v3_read_apxr1(struct kvm_vcpu *vcpu, +static void __vgic_v3_read_apxr1(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - __vgic_v3_read_apxrn(vcpu, rt, 1); + __vgic_v3_read_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 1); } -static void __vgic_v3_read_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_apxr2(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - __vgic_v3_read_apxrn(vcpu, rt, 2); + __vgic_v3_read_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 2); } -static void __vgic_v3_read_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_apxr3(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - __vgic_v3_read_apxrn(vcpu, rt, 3); + __vgic_v3_read_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 3); } -static void __vgic_v3_write_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_apxr0(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - __vgic_v3_write_apxrn(vcpu, rt, 0); + __vgic_v3_write_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 0); } -static void __vgic_v3_write_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_apxr1(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - __vgic_v3_write_apxrn(vcpu, rt, 1); + __vgic_v3_write_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 1); } -static void __vgic_v3_write_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_apxr2(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - __vgic_v3_write_apxrn(vcpu, rt, 2); + __vgic_v3_write_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 2); } -static void __vgic_v3_write_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_apxr3(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - __vgic_v3_write_apxrn(vcpu, rt, 3); + __vgic_v3_write_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 3); } -static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_hppir(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct 
vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 lr_val; int lr, lr_grp, grp; - grp = __vgic_v3_get_group(vcpu); + grp = __vgic_v3_get_group(vcpu_ctxt, vcpu_hyps); - lr = __vgic_v3_highest_priority_lr(vcpu, vmcr, &lr_val); + lr = __vgic_v3_highest_priority_lr(cpu_if, vmcr, &lr_val); if (lr == -1) goto spurious; @@ -999,19 +1019,21 @@ static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) ctxt_set_reg(vcpu_ctxt, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK); } -static void __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_pmr(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); vmcr &= ICH_VMCR_PMR_MASK; vmcr >>= ICH_VMCR_PMR_SHIFT; ctxt_set_reg(vcpu_ctxt, rt, vmcr); } -static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_pmr(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val = ctxt_get_reg(vcpu_ctxt, rt); val <<= ICH_VMCR_PMR_SHIFT; @@ -1022,18 +1044,20 @@ static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) write_gicreg(vmcr, ICH_VMCR_EL2); } -static void __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_rpr(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val = __vgic_v3_get_highest_active_priority(); ctxt_set_reg(vcpu_ctxt, rt, val); } -static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_ctlr(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 vtr, val; vtr = read_gicreg(ICH_VTR_EL2); @@ -1053,10 +1077,11 @@ static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) ctxt_set_reg(vcpu_ctxt, rt, val); } -static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_ctlr(struct vgic_v3_cpu_if *cpu_if, + struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps, u32 vmcr, + int rt) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u32 val = ctxt_get_reg(vcpu_ctxt, rt); if (val & ICC_CTLR_EL1_CBPR_MASK) @@ -1074,16 +1099,18 @@ static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) { + struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); int rt; u32 esr; u32 vmcr; - void (*fn)(struct kvm_vcpu *, u32, int); + void (*fn)(struct vgic_v3_cpu_if *, struct kvm_cpu_context *, + struct vcpu_hyp_state *, u32, int); bool is_read; u32 sysreg; - esr = kvm_vcpu_get_esr(vcpu); + esr = kvm_hyp_state_get_esr(vcpu_hyps); if (ctxt_mode_is_32bit(vcpu_ctxt)) { if (!kvm_condition_valid(vcpu)) { 
__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
@@ -1195,8 +1222,8 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
 	}
 
 	vmcr = __vgic_v3_read_vmcr();
-	rt = kvm_vcpu_sys_get_rt(vcpu);
-	fn(vcpu, vmcr, rt);
+	rt = kvm_hyp_state_sys_get_rt(vcpu_hyps);
+	fn(cpu_if, vcpu_ctxt, vcpu_hyps, vmcr, rt);
 
 	__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);

From patchwork Fri Sep 24 12:53:45 2021
Date: Fri, 24 Sep 2021 13:53:45 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-17-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 16/30] KVM: arm64: reduce scope of vgic_v3 access parameters
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

Now that __vgic_v3_perform_cpuif_access() needs only the vgic_v3_cpu_if, kvm_cpu_context, and vcpu_hyp_state, pass those directly rather than the whole vcpu.

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_hyp.h        | 4 +++-
 arch/arm64/kvm/hyp/include/hyp/switch.h | 2 +-
 arch/arm64/kvm/hyp/vgic-v3-sr.c         | 9 ++++-----
 3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index d9a8872a7efb..b379c2b96f33 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -63,7 +63,9 @@ void __vgic_v3_activate_traps(struct vgic_v3_cpu_if *cpu_if);
 void __vgic_v3_deactivate_traps(struct vgic_v3_cpu_if *cpu_if);
 void __vgic_v3_save_aprs(struct vgic_v3_cpu_if *cpu_if);
 void __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if);
-int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu);
+int __vgic_v3_perform_cpuif_access(struct vgic_v3_cpu_if *cpu_if,
+				   struct kvm_cpu_context *vcpu_ctxt,
+				   struct vcpu_hyp_state *vcpu_hyps);
 
 #ifdef __KVM_NVHE_HYPERVISOR__
 void __timer_enable_traps(void);
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 30fcfe84f609..44e76993a9b4 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -502,7 +502,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgi
 	if (static_branch_unlikely(&vgic_v3_cpuif_trap) &&
 	    (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 ||
 	     kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_CP15_32)) {
-		int ret = __vgic_v3_perform_cpuif_access(vcpu);
+		int ret = __vgic_v3_perform_cpuif_access(&vcpu->arch.vgic_cpu.vgic_v3, vcpu_ctxt, vcpu_hyps);
 
 		if (ret == 1)
 			goto guest;
diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
index 3e1951b04fce..2c16e0cd45f0 100644
--- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
+++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
@@ -1097,11 +1097,10 @@ static void __vgic_v3_write_ctlr(struct vgic_v3_cpu_if *cpu_if,
 	write_gicreg(vmcr, ICH_VMCR_EL2);
 }
 
-int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
+int __vgic_v3_perform_cpuif_access(struct vgic_v3_cpu_if *cpu_if,
+				   struct kvm_cpu_context *vcpu_ctxt,
+				   struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	int rt;
 	u32 esr;
 	u32 vmcr;
@@ -1112,7 +1111,7 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
 	esr = kvm_hyp_state_get_esr(vcpu_hyps);
 	if (ctxt_mode_is_32bit(vcpu_ctxt)) {
-		if (!kvm_condition_valid(vcpu)) {
+		if (!__kvm_condition_valid(vcpu_ctxt, vcpu_hyps)) {
 			__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
 			return 1;
 		}
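[Editorial aside: the patch above is an instance of a general parameter-narrowing pattern, replacing a wide struct parameter with the narrow pieces the function actually touches. Below is a minimal, self-contained C sketch of the idea; all types and names are invented for illustration and are not taken from the series.]

	#include <stdint.h>
	#include <stdio.h>

	struct cpu_context { uint64_t regs[31]; };
	struct hyp_state   { uint64_t esr; };
	struct big_vcpu    { struct cpu_context ctxt; struct hyp_state hyps; int id; };

	/* Before: the helper takes the whole aggregate even though it only
	 * touches one member, so its real dependencies are hidden. */
	static uint64_t read_reg_wide(struct big_vcpu *vcpu, int rt)
	{
		return vcpu->ctxt.regs[rt];
	}

	/* After: the helper takes exactly what it reads, which documents
	 * its footprint and lets callers that hold only this piece of
	 * state (e.g. a hypervisor-private copy) reuse it. */
	static uint64_t read_reg_narrow(struct cpu_context *ctxt, int rt)
	{
		return ctxt->regs[rt];
	}

	int main(void)
	{
		struct big_vcpu v = { .ctxt = { .regs = { [5] = 42 } } };

		printf("%llu %llu\n",
		       (unsigned long long)read_reg_wide(&v, 5),
		       (unsigned long long)read_reg_narrow(&v.ctxt, 5));
		return 0;
	}

The narrow form matters for pKVM: a hypervisor that keeps its own copy of the guest context and hyp state, but not of the whole struct kvm_vcpu, can still call such helpers.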
From patchwork Fri Sep 24 12:53:46 2021
Date: Fri, 24 Sep 2021 13:53:46 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-18-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 17/30] KVM: arm64: access __hyp_running_vcpu via accessors only
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

__hyp_running_vcpu exposes struct kvm_vcpu, but everything that accesses it needs only the cpu context and the hyp state. Start this refactoring by ensuring that all accesses to __hyp_running_vcpu go via accessors rather than touching the field directly.

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_asm.h           | 24 ++++++++++++++++++++++
 arch/arm64/include/asm/kvm_host.h          |  7 +++++++
 arch/arm64/kernel/asm-offsets.c            |  1 +
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |  4 ++--
 arch/arm64/kvm/hyp/nvhe/switch.c           | 10 ++++-----
 arch/arm64/kvm/hyp/vhe/switch.c            |  8 +++-----
 6 files changed, 41 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 5e9b33cbac51..766b6a852407 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -251,6 +251,18 @@ extern u32 __kvm_get_mdcr_el2(void);
 	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
 .endm
 
+.macro get_vcpu_ctxt_ptr vcpu, ctxt
+	get_host_ctxt \ctxt, \vcpu
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+	add	\vcpu, \vcpu, #VCPU_CONTEXT
+.endm
+
+.macro get_vcpu_hyps_ptr vcpu, ctxt
+	get_host_ctxt \ctxt, \vcpu
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+	add	\vcpu, \vcpu, #VCPU_HYPS
+.endm
+
 .macro get_loaded_vcpu vcpu, ctxt
 	adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu
 	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
@@ -261,6 +273,18 @@ extern u32 __kvm_get_mdcr_el2(void);
 	str	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
 .endm
 
+.macro get_loaded_vcpu_ctxt vcpu, ctxt
+	adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+	add	\vcpu, \vcpu, #VCPU_CONTEXT
+.endm
+
+.macro get_loaded_vcpu_hyps vcpu, ctxt
+	adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+	add	\vcpu, \vcpu, #VCPU_HYPS
+.endm
+
 /*
  * KVM extable for unexpected exceptions.
  * In the same format _asm_extable, but output to a different section so that
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index dc4b5e133d86..4b01c74705ad 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -230,6 +230,13 @@ struct kvm_cpu_context {
 	struct kvm_vcpu *__hyp_running_vcpu;
 };
 
+#define get_hyp_running_vcpu(ctxt)	(ctxt)->__hyp_running_vcpu
+#define set_hyp_running_vcpu(ctxt, vcpu) (ctxt)->__hyp_running_vcpu = (vcpu)
+#define is_hyp_running_vcpu(ctxt)	(ctxt)->__hyp_running_vcpu
+
+#define get_hyp_running_ctxt(host_ctxt)	(host_ctxt)->__hyp_running_vcpu ? &(host_ctxt)->__hyp_running_vcpu->arch.ctxt : NULL
+#define get_hyp_running_hyps(host_ctxt)	(host_ctxt)->__hyp_running_vcpu ?
&(host_ctxt)->__hyp_running_vcpu->arch.hyp_state : NULL + struct kvm_pmu_events { u32 events_host; u32 events_guest; diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 1776efc3cc9d..1ecc55570acc 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -107,6 +107,7 @@ int main(void) BLANK(); #ifdef CONFIG_KVM DEFINE(VCPU_CONTEXT, offsetof(struct kvm_vcpu, arch.ctxt)); + DEFINE(VCPU_HYPS, offsetof(struct kvm_vcpu, arch.hyp_state)); DEFINE(VCPU_FAULT_DISR, offsetof(struct kvm_vcpu, arch.hyp_state.fault.disr_el1)); DEFINE(VCPU_WORKAROUND_FLAGS, offsetof(struct kvm_vcpu, arch.workaround_flags)); DEFINE(CPU_USER_PT_REGS, offsetof(struct kvm_cpu_context, regs)); diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h index 7bc8b34b65b2..df9cd2177e71 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -80,7 +80,7 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) !cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL1), SYS_SCTLR); write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL1), SYS_TCR); - } else if (!ctxt->__hyp_running_vcpu) { + } else if (!is_hyp_running_vcpu(ctxt)) { /* * Must only be done for guest registers, hence the context * test. We're coming from the host, so SCTLR.M is already @@ -109,7 +109,7 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) if (!has_vhe() && cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT) && - ctxt->__hyp_running_vcpu) { + is_hyp_running_vcpu(ctxt)) { /* * Must only be done for host registers, hence the context * test. Pairs with nVHE's __deactivate_traps(). 
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 164b0f899f7b..12c673301210 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -191,7 +191,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	}
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	host_ctxt->__hyp_running_vcpu = vcpu;
+	set_hyp_running_vcpu(host_ctxt, vcpu);
 	guest_ctxt = &vcpu->arch.ctxt;
 
 	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
@@ -261,7 +261,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	if (system_uses_irq_prio_masking())
 		gic_write_pmr(GIC_PRIO_IRQOFF);
 
-	host_ctxt->__hyp_running_vcpu = NULL;
+	set_hyp_running_vcpu(host_ctxt, NULL);
 
 	return exit_code;
 }
@@ -274,12 +274,10 @@ void __noreturn hyp_panic(void)
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_vcpu *vcpu;
 	struct vcpu_hyp_state *vcpu_hyps;
-	struct kvm_cpu_context *vcpu_ctxt;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	vcpu = host_ctxt->__hyp_running_vcpu;
-	vcpu_hyps = &hyp_state(vcpu);
-	vcpu_ctxt = &vcpu_ctxt(vcpu);
+	vcpu = get_hyp_running_vcpu(host_ctxt);
+	vcpu_hyps = get_hyp_running_hyps(host_ctxt);
 
 	if (vcpu) {
 		__timer_disable_traps();
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index f315058a50ca..14c434e00914 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -117,7 +117,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	u64 exit_code;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	host_ctxt->__hyp_running_vcpu = vcpu;
+	set_hyp_running_vcpu(host_ctxt, vcpu);
 	guest_ctxt = &vcpu->arch.ctxt;
 
 	sysreg_save_host_state_vhe(host_ctxt);
@@ -205,12 +205,10 @@ static void __hyp_call_panic(u64 spsr, u64 elr, u64 par)
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_vcpu *vcpu;
 	struct vcpu_hyp_state *vcpu_hyps;
-	struct kvm_cpu_context *vcpu_ctxt;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	vcpu = host_ctxt->__hyp_running_vcpu;
-	vcpu_hyps = &hyp_state(vcpu);
-	vcpu_ctxt = &vcpu_ctxt(vcpu);
+	vcpu = get_hyp_running_vcpu(host_ctxt);
+	vcpu_hyps = get_hyp_running_hyps(host_ctxt);
 
 	__deactivate_traps(vcpu_hyps);
 	sysreg_restore_host_state_vhe(host_ctxt);

From patchwork Fri Sep 24 12:53:47 2021
Date: Fri, 24 Sep 2021 13:53:47 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-19-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 18/30] KVM: arm64: reduce scope of __guest_exit to only depend on kvm_cpu_context
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

__guest_exit needs only the kvm_cpu_context, which it reaches via the VCPU_CONTEXT offset. Pass just that to it, and fix it up so that it refers only to the kvm_cpu_context rather than to the vcpu.
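[Editorial aside: VCPU_CONTEXT here is an asm-offsets style constant, the byte offset of a member within a struct, exported so assembly can compute member pointers. As a rough sketch, using simplified stand-in types rather than the real kvm_vcpu layout, the pointer arithmetic behind a macro step like "add \vcpu, \vcpu, #VCPU_CONTEXT" looks like this in C.]

	#include <assert.h>
	#include <stddef.h>
	#include <stdint.h>

	struct cpu_context { uint64_t regs[31]; };

	struct vcpu {
		int id;
		struct cpu_context ctxt;
	};

	/* What an asm-offsets.c DEFINE() would emit as an assembler
	 * constant: the byte offset of ctxt within struct vcpu. */
	#define VCPU_CONTEXT offsetof(struct vcpu, ctxt)

	int main(void)
	{
		struct vcpu v = { .id = 7 };

		/* Base pointer plus byte offset yields the member pointer,
		 * which is exactly what the assembly 'add' performs. */
		struct cpu_context *ctxt =
			(struct cpu_context *)((char *)&v + VCPU_CONTEXT);

		assert(ctxt == &v.ctxt);
		return 0;
	}

Handing the pre-offset pointer to __guest_exit means the assembly no longer needs to know the layout of struct kvm_vcpu at all.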
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/entry.S     | 7 ++-----
 arch/arm64/kvm/hyp/hyp-entry.S | 8 ++++----
 2 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index e831d3dfd50d..996bdc9555da 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -99,15 +99,12 @@ SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
 	adr_l	x1, hyp_panic
 	str	x1, [x0, #CPU_XREG_OFFSET(30)]
 
-	get_vcpu_ptr	x1, x0
+	get_vcpu_ctxt_ptr x1, x0
 
 SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
 	// x0: return code
-	// x1: vcpu
+	// x1: ctxt
 	// x2-x29,lr: vcpu regs
-	// vcpu x0-x1 on the stack
-
-	add	x1, x1, #VCPU_CONTEXT
 
 	ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN)
 
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 5f49df4ffdd8..704b3388c86a 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -71,17 +71,17 @@ wa_epilogue:
 	sb
 
 el1_trap:
-	get_vcpu_ptr	x1, x0
+	get_vcpu_ctxt_ptr x1, x0
 	mov	x0, #ARM_EXCEPTION_TRAP
 	b	__guest_exit
 
 el1_irq:
-	get_vcpu_ptr	x1, x0
+	get_vcpu_ctxt_ptr x1, x0
 	mov	x0, #ARM_EXCEPTION_IRQ
 	b	__guest_exit
 
 el1_error:
-	get_vcpu_ptr	x1, x0
+	get_vcpu_ctxt_ptr x1, x0
 	mov	x0, #ARM_EXCEPTION_EL1_SERROR
 	b	__guest_exit
 
@@ -100,7 +100,7 @@ el2_sync:
 
 1:
 	/* Let's attempt a recovery from the illegal exception return */
-	get_vcpu_ptr	x1, x0
+	get_vcpu_ctxt_ptr x1, x0
 	mov	x0, #ARM_EXCEPTION_IL
 	b	__guest_exit

From patchwork Fri Sep 24 12:53:48 2021
Date: Fri, 24 Sep 2021 13:53:48 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-20-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 19/30] KVM: arm64: change calls of get_loaded_vcpu to get_loaded_vcpu_ctxt
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

get_loaded_vcpu is used only as a NULL check. get_loaded_vcpu_ctxt fills the same role and reduces the scope.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/entry.S     | 4 ++--
 arch/arm64/kvm/hyp/nvhe/host.S | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 996bdc9555da..1804be5b7ead 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -81,10 +81,10 @@ alternative_else_nop_endif
 
 SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
 	// x2-x29,lr: vcpu regs
-	// vcpu x0-x1 on the stack
+	// vcpu ctxt x0-x1 on the stack
 
 	// If the hyp context is loaded, go straight to hyp_panic
-	get_loaded_vcpu x0, x1
+	get_loaded_vcpu_ctxt x0, x1
 	cbnz	x0, 1f
 	b	hyp_panic
 
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 2b23400e0fb3..7de2e8716f69 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -134,7 +134,7 @@ SYM_FUNC_END(__hyp_do_panic)
 	.align 7
 	/* If a guest is loaded, panic out of it. */
 	stp	x0, x1, [sp, #-16]!
-	get_loaded_vcpu x0, x1
+	get_loaded_vcpu_ctxt x0, x1
 	cbnz	x0, __guest_exit_panic
 	add	sp, sp, #16

From patchwork Fri Sep 24 12:53:49 2021
Date: Fri, 24 Sep 2021 13:53:49 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-21-tabba@google.com>
References:
<20210924125359.2587041-1-tabba@google.com> X-Mailer: git-send-email 2.33.0.685.g46640cef36-goog Subject: [RFC PATCH v1 20/30] KVM: arm64: add __hyp_running_ctxt and __hyp_running_hyps From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org In order to prepare to remove __hyp_running_vcpu, add __hyp_running_ctxt and __hyp_running_hyps to access the running kvm_cpu_ctxt and the hyp_state, as well as their associated assembly offsets. These new fields are updated but not accessed yet. Their state is consistent with __hyp_running_vcpu. Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_asm.h | 13 +++++++++++++ arch/arm64/include/asm/kvm_host.h | 19 ++++++++++++++++--- arch/arm64/kernel/asm-offsets.c | 2 ++ arch/arm64/kvm/hyp/entry.S | 2 +- 4 files changed, 32 insertions(+), 4 deletions(-) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 766b6a852407..52079e937fcd 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -271,6 +271,19 @@ extern u32 __kvm_get_mdcr_el2(void); .macro set_loaded_vcpu vcpu, ctxt, tmp adr_this_cpu \ctxt, kvm_hyp_ctxt, \tmp str \vcpu, [\ctxt, #HOST_CONTEXT_VCPU] + + add \tmp, \vcpu, #VCPU_CONTEXT + str \tmp, [\ctxt, #HOST_CONTEXT_CTXT] + + add \tmp, \vcpu, #VCPU_HYPS + str \tmp, [\ctxt, #HOST_CONTEXT_HYPS] +.endm + +.macro clear_loaded_vcpu ctxt, tmp + adr_this_cpu \ctxt, kvm_hyp_ctxt, \tmp + str xzr, [\ctxt, #HOST_CONTEXT_VCPU] + str xzr, [\ctxt, #HOST_CONTEXT_CTXT] + str xzr, [\ctxt, #HOST_CONTEXT_HYPS] .endm .macro get_loaded_vcpu_ctxt vcpu, ctxt diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 4b01c74705ad..b42d0c6c8004 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -228,14 +228,27 @@ struct kvm_cpu_context { u64 sys_regs[NR_SYS_REGS]; struct kvm_vcpu *__hyp_running_vcpu; + struct kvm_cpu_context *__hyp_running_ctxt; + struct vcpu_hyp_state *__hyp_running_hyps; }; #define get_hyp_running_vcpu(ctxt) (ctxt)->__hyp_running_vcpu -#define set_hyp_running_vcpu(ctxt, vcpu) (ctxt)->__hyp_running_vcpu = (vcpu) +#define set_hyp_running_vcpu(host_ctxt, vcpu) do { \ + struct kvm_vcpu *v = (vcpu); \ + (host_ctxt)->__hyp_running_vcpu = v; \ + if (vcpu) { \ + (host_ctxt)->__hyp_running_ctxt = &v->arch.ctxt; \ + (host_ctxt)->__hyp_running_hyps = &v->arch.hyp_state; \ + } else { \ + (host_ctxt)->__hyp_running_ctxt = NULL; \ + (host_ctxt)->__hyp_running_hyps = NULL; \ + }\ +} while(0) + #define is_hyp_running_vcpu(ctxt) (ctxt)->__hyp_running_vcpu -#define get_hyp_running_ctxt(host_ctxt) (host_ctxt)->__hyp_running_vcpu ? &(host_ctxt)->__hyp_running_vcpu->arch.ctxt : NULL -#define get_hyp_running_hyps(host_ctxt) (host_ctxt)->__hyp_running_vcpu ? 
&(host_ctxt)->__hyp_running_vcpu->arch.hyp_state : NULL
+#define get_hyp_running_ctxt(host_ctxt)	(host_ctxt)->__hyp_running_ctxt
+#define get_hyp_running_hyps(host_ctxt)	(host_ctxt)->__hyp_running_hyps
 
 struct kvm_pmu_events {
 	u32 events_host;
 	u32 events_guest;
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 1ecc55570acc..9c25078da294 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -117,6 +117,8 @@ int main(void)
 	DEFINE(CPU_APDBKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
 	DEFINE(CPU_APGAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
 	DEFINE(HOST_CONTEXT_VCPU, offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
+	DEFINE(HOST_CONTEXT_CTXT, offsetof(struct kvm_cpu_context, __hyp_running_ctxt));
+	DEFINE(HOST_CONTEXT_HYPS, offsetof(struct kvm_cpu_context, __hyp_running_hyps));
 	DEFINE(HOST_DATA_CONTEXT, offsetof(struct kvm_host_data, host_ctxt));
 	DEFINE(NVHE_INIT_MAIR_EL2, offsetof(struct kvm_nvhe_init_params, mair_el2));
 	DEFINE(NVHE_INIT_TCR_EL2, offsetof(struct kvm_nvhe_init_params, tcr_el2));
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 1804be5b7ead..8e7033aa5770 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -145,7 +145,7 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
 	// Now restore the hyp regs
 	restore_callee_saved_regs x2
 
-	set_loaded_vcpu xzr, x2, x3
+	clear_loaded_vcpu x2, x3
 
 alternative_if ARM64_HAS_RAS_EXTN
 	// If we have the RAS extensions we can consume a pending error

From patchwork Fri Sep 24 12:53:50 2021
Date: Fri, 24 Sep 2021 13:53:50 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-22-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 21/30] KVM: arm64: transition code to __hyp_running_ctxt and __hyp_running_hyps
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com

Transition the code to use the new hyp_running pointers. Everything remains consistent, because all of the fields are kept in sync. Remove __hyp_running_vcpu now that no one is using it.
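[Editorial aside: the reason this transition is safe is the invariant maintained by set_hyp_running_vcpu() two patches earlier: the wide pointer and the two narrow views are always updated together, so readers can be migrated one at a time. A sketch of that idiom follows, with invented stand-in types rather than the kernel's real structures.]

	#include <assert.h>
	#include <stddef.h>

	struct cpu_context { int dummy; };
	struct hyp_state   { int dummy; };

	struct vcpu {
		struct cpu_context ctxt;
		struct hyp_state hyps;
	};

	struct host_context {
		struct vcpu *running_vcpu;        /* legacy field, to be removed */
		struct cpu_context *running_ctxt; /* narrow view */
		struct hyp_state *running_hyps;   /* narrow view */
	};

	static void set_running_vcpu(struct host_context *host, struct vcpu *v)
	{
		host->running_vcpu = v;
		/* NULL must propagate so "is anything running?" checks keep
		 * working against whichever field a caller still reads. */
		host->running_ctxt = v ? &v->ctxt : NULL;
		host->running_hyps = v ? &v->hyps : NULL;
	}

	int main(void)
	{
		struct host_context host = { 0 };
		struct vcpu v;

		set_running_vcpu(&host, &v);
		assert(host.running_ctxt == &v.ctxt && host.running_hyps == &v.hyps);

		set_running_vcpu(&host, NULL);
		assert(!host.running_ctxt && !host.running_hyps);
		return 0;
	}

Once every reader has moved to the narrow views, the wide field can be deleted without changing behaviour, which is exactly what this patch does.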
Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_asm.h | 24 ++++-------------------- arch/arm64/include/asm/kvm_host.h | 5 +---- arch/arm64/kernel/asm-offsets.c | 1 - arch/arm64/kvm/handle_exit.c | 6 +++--- arch/arm64/kvm/hyp/nvhe/host.S | 2 +- arch/arm64/kvm/hyp/nvhe/switch.c | 4 +--- arch/arm64/kvm/hyp/vhe/switch.c | 8 ++++---- 7 files changed, 14 insertions(+), 36 deletions(-) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 52079e937fcd..e24ebcf9e0d3 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -246,31 +246,18 @@ extern u32 __kvm_get_mdcr_el2(void); add \reg, \reg, #HOST_DATA_CONTEXT .endm -.macro get_vcpu_ptr vcpu, ctxt - get_host_ctxt \ctxt, \vcpu - ldr \vcpu, [\ctxt, #HOST_CONTEXT_VCPU] -.endm - .macro get_vcpu_ctxt_ptr vcpu, ctxt get_host_ctxt \ctxt, \vcpu - ldr \vcpu, [\ctxt, #HOST_CONTEXT_VCPU] - add \vcpu, \vcpu, #VCPU_CONTEXT + ldr \vcpu, [\ctxt, #HOST_CONTEXT_CTXT] .endm .macro get_vcpu_hyps_ptr vcpu, ctxt get_host_ctxt \ctxt, \vcpu - ldr \vcpu, [\ctxt, #HOST_CONTEXT_VCPU] - add \vcpu, \vcpu, #VCPU_HYPS -.endm - -.macro get_loaded_vcpu vcpu, ctxt - adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu - ldr \vcpu, [\ctxt, #HOST_CONTEXT_VCPU] + ldr \vcpu, [\ctxt, #HOST_CONTEXT_HYPS] .endm .macro set_loaded_vcpu vcpu, ctxt, tmp adr_this_cpu \ctxt, kvm_hyp_ctxt, \tmp - str \vcpu, [\ctxt, #HOST_CONTEXT_VCPU] add \tmp, \vcpu, #VCPU_CONTEXT str \tmp, [\ctxt, #HOST_CONTEXT_CTXT] @@ -281,21 +268,18 @@ extern u32 __kvm_get_mdcr_el2(void); .macro clear_loaded_vcpu ctxt, tmp adr_this_cpu \ctxt, kvm_hyp_ctxt, \tmp - str xzr, [\ctxt, #HOST_CONTEXT_VCPU] str xzr, [\ctxt, #HOST_CONTEXT_CTXT] str xzr, [\ctxt, #HOST_CONTEXT_HYPS] .endm .macro get_loaded_vcpu_ctxt vcpu, ctxt adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu - ldr \vcpu, [\ctxt, #HOST_CONTEXT_VCPU] - add \vcpu, \vcpu, #VCPU_CONTEXT + ldr \vcpu, [\ctxt, #HOST_CONTEXT_CTXT] .endm .macro get_loaded_vcpu_hyps vcpu, ctxt adr_this_cpu \ctxt, kvm_hyp_ctxt, \vcpu - ldr \vcpu, [\ctxt, #HOST_CONTEXT_VCPU] - add \vcpu, \vcpu, #VCPU_HYPS + ldr \vcpu, [\ctxt, #HOST_CONTEXT_HYPS] .endm /* diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index b42d0c6c8004..035ca5a49166 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -227,15 +227,12 @@ struct kvm_cpu_context { u64 sys_regs[NR_SYS_REGS]; - struct kvm_vcpu *__hyp_running_vcpu; struct kvm_cpu_context *__hyp_running_ctxt; struct vcpu_hyp_state *__hyp_running_hyps; }; -#define get_hyp_running_vcpu(ctxt) (ctxt)->__hyp_running_vcpu #define set_hyp_running_vcpu(host_ctxt, vcpu) do { \ struct kvm_vcpu *v = (vcpu); \ - (host_ctxt)->__hyp_running_vcpu = v; \ if (vcpu) { \ (host_ctxt)->__hyp_running_ctxt = &v->arch.ctxt; \ (host_ctxt)->__hyp_running_hyps = &v->arch.hyp_state; \ @@ -245,7 +242,7 @@ struct kvm_cpu_context { }\ } while(0) -#define is_hyp_running_vcpu(ctxt) (ctxt)->__hyp_running_vcpu +#define is_hyp_running_vcpu(ctxt) (ctxt)->__hyp_running_ctxt #define get_hyp_running_ctxt(host_ctxt) (host_ctxt)->__hyp_running_ctxt #define get_hyp_running_hyps(host_ctxt) (host_ctxt)->__hyp_running_hyps diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 9c25078da294..f42aea730cf4 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -116,7 +116,6 @@ int main(void) DEFINE(CPU_APDAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1])); DEFINE(CPU_APDBKEYLO_EL1, offsetof(struct kvm_cpu_context, 
sys_regs[APDBKEYLO_EL1])); DEFINE(CPU_APGAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1])); - DEFINE(HOST_CONTEXT_VCPU, offsetof(struct kvm_cpu_context, __hyp_running_vcpu)); DEFINE(HOST_CONTEXT_CTXT, offsetof(struct kvm_cpu_context, __hyp_running_ctxt)); DEFINE(HOST_CONTEXT_HYPS, offsetof(struct kvm_cpu_context, __hyp_running_hyps)); DEFINE(HOST_DATA_CONTEXT, offsetof(struct kvm_host_data, host_ctxt)); diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index 22e9f03fe901..cb6a25b79e38 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -293,7 +293,7 @@ void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index) } void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr, - u64 par, uintptr_t vcpu, + u64 par, uintptr_t vcpu_ctxt, u64 far, u64 hpfar) { u64 elr_in_kimg = __phys_to_kimg(__hyp_pa(elr)); u64 hyp_offset = elr_in_kimg - kaslr_offset() - elr; @@ -333,6 +333,6 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr, */ kvm_err("Hyp Offset: 0x%llx\n", hyp_offset); - panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%016lx\n", - spsr, elr, esr, far, hpfar, par, vcpu); + panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU_CTXT:%016lx\n", + spsr, elr, esr, far, hpfar, par, vcpu_ctxt); } diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S index 7de2e8716f69..975cf125d54c 100644 --- a/arch/arm64/kvm/hyp/nvhe/host.S +++ b/arch/arm64/kvm/hyp/nvhe/host.S @@ -87,7 +87,7 @@ SYM_FUNC_START(__hyp_do_panic) /* Load the panic arguments into x0-7 */ mrs x0, esr_el2 - get_vcpu_ptr x4, x5 + get_vcpu_ctxt_ptr x4, x5 mrs x5, far_el2 mrs x6, hpfar_el2 mov x7, xzr // Unused argument diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 12c673301210..483df8fe052e 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -272,14 +272,12 @@ void __noreturn hyp_panic(void) u64 elr = read_sysreg_el2(SYS_ELR); u64 par = read_sysreg_par(); struct kvm_cpu_context *host_ctxt; - struct kvm_vcpu *vcpu; struct vcpu_hyp_state *vcpu_hyps; host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; - vcpu = get_hyp_running_vcpu(host_ctxt); vcpu_hyps = get_hyp_running_hyps(host_ctxt); - if (vcpu) { + if (vcpu_hyps) { __timer_disable_traps(); __deactivate_traps(vcpu_hyps); __load_host_stage2(); diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 14c434e00914..64de9f0d7636 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -203,20 +203,20 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) static void __hyp_call_panic(u64 spsr, u64 elr, u64 par) { struct kvm_cpu_context *host_ctxt; - struct kvm_vcpu *vcpu; + struct kvm_cpu_context *vcpu_ctxt; struct vcpu_hyp_state *vcpu_hyps; host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; - vcpu = get_hyp_running_vcpu(host_ctxt); + vcpu_ctxt = get_hyp_running_ctxt(host_ctxt); vcpu_hyps = get_hyp_running_hyps(host_ctxt); __deactivate_traps(vcpu_hyps); sysreg_restore_host_state_vhe(host_ctxt); - panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n", + panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU_CTXT:%p\n", spsr, elr, read_sysreg_el2(SYS_ESR), read_sysreg_el2(SYS_FAR), - read_sysreg(hpfar_el2), par, vcpu); + read_sysreg(hpfar_el2), par, 
vcpu_ctxt);
 }
 NOKPROBE_SYMBOL(__hyp_call_panic);

From patchwork Fri Sep 24 12:53:51 2021
Date: Fri, 24 Sep 2021 13:53:51 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-23-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
X-Mailer: git-send-email
2.33.0.685.g46640cef36-goog Subject: [RFC PATCH v1 22/30] KVM: arm64: reduce scope of __guest_enter to depend only on kvm_cpu_ctxt From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org guest_enter doesn't need the vcpu, only the guest's kvm_cpu_ctxt. Reduce its scope to that. With this commit, the only state in struct vcpu that the hypervisor needs to save locally in future patches is guest context (kvm_cpu_context) and the hypervisor state (vcpu_hyp_state). Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_hyp.h | 2 +- arch/arm64/kvm/hyp/entry.S | 10 ++++------ arch/arm64/kvm/hyp/nvhe/switch.c | 5 ++++- arch/arm64/kvm/hyp/vhe/switch.c | 5 ++++- 4 files changed, 13 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index b379c2b96f33..c5206e958136 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -100,7 +100,7 @@ void activate_traps_vhe_load(struct vcpu_hyp_state *vcpu_hyps); void deactivate_traps_vhe_put(void); #endif -u64 __guest_enter(struct kvm_vcpu *vcpu); +u64 __guest_enter(struct kvm_cpu_context *guest_ctxt); bool kvm_host_psci_handler(struct kvm_cpu_context *host_ctxt); diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S index 8e7033aa5770..f553f184e402 100644 --- a/arch/arm64/kvm/hyp/entry.S +++ b/arch/arm64/kvm/hyp/entry.S @@ -18,12 +18,12 @@ .text /* - * u64 __guest_enter(struct kvm_vcpu *vcpu); + * u64 __guest_enter(struct kvm_cpu_context *guest_ctxt); */ SYM_FUNC_START(__guest_enter) - // x0: vcpu + // x0: guest context (input parameter) // x1-x17: clobbered by macros - // x29: guest context + // x29: guest context (maintained for call duration) adr_this_cpu x1, kvm_hyp_ctxt, x2 @@ -47,9 +47,7 @@ alternative_else_nop_endif ret 1: - set_loaded_vcpu x0, x1, x2 - - add x29, x0, #VCPU_CONTEXT + mov x29, x0 // Macro ptrauth_switch_to_guest format: // ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3) diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 483df8fe052e..d9a69e66158c 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -228,8 +228,11 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) __debug_switch_to_guest(vcpu); do { + struct kvm_cpu_context *hyp_ctxt = this_cpu_ptr(&kvm_hyp_ctxt); + set_hyp_running_vcpu(hyp_ctxt, vcpu); + /* Jump in the fire! */ - exit_code = __guest_enter(vcpu); + exit_code = __guest_enter(guest_ctxt); /* And we're baaack! */ } while (fixup_guest_exit(vcpu, vgic, &exit_code)); diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 64de9f0d7636..5039910a7c80 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -142,8 +142,11 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) __debug_switch_to_guest(vcpu); do { + struct kvm_cpu_context *hyp_ctxt = this_cpu_ptr(&kvm_hyp_ctxt); + set_hyp_running_vcpu(hyp_ctxt, vcpu); + /* Jump in the fire! */ - exit_code = __guest_enter(vcpu); + exit_code = __guest_enter(guest_ctxt); /* And we're baaack! 
*/ } while (fixup_guest_exit(vcpu, vgic, &exit_code)); From patchwork Fri Sep 24 12:53:52 2021 Date: Fri, 24 Sep 2021 13:53:52 +0100 In-Reply-To: <20210924125359.2587041-1-tabba@google.com> Message-Id: <20210924125359.2587041-24-tabba@google.com> References: <20210924125359.2587041-1-tabba@google.com> X-Mailer: git-send-email
2.33.0.685.g46640cef36-goog Subject: [RFC PATCH v1 23/30] KVM: arm64: COCCI: remove_unused.cocci: remove unused ctxt and hypstate variables From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org These local variables were added aggressively. Remove the ones that ended up not being used. Also, some of the added variables are missing a new line after their definition. Insert that for the remaining ones. This applies the semantic patch with the following command: spatch --sp-file cocci_refactor/remove_unused.cocci --dir arch/arm64/kvm/hyp --in-place --include-headers --force-diff Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/exception.c | 5 ----- arch/arm64/kvm/hyp/include/hyp/switch.h | 9 ++++----- arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 2 ++ arch/arm64/kvm/hyp/nvhe/switch.c | 1 - arch/arm64/kvm/hyp/vhe/switch.c | 3 --- arch/arm64/kvm/hyp/vhe/sysreg-sr.c | 3 --- 6 files changed, 6 insertions(+), 17 deletions(-) diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c index a08806efe031..bb0bc1f5568c 100644 --- a/arch/arm64/kvm/hyp/exception.c +++ b/arch/arm64/kvm/hyp/exception.c @@ -59,31 +59,26 @@ static void __ctxt_write_spsr_und(struct kvm_cpu_context *vcpu_ctxt, u64 val) static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg) { - const struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); return __ctxt_read_sys_reg(&vcpu_ctxt(vcpu), reg); } static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); __ctxt_write_sys_reg(&vcpu_ctxt(vcpu), val, reg); } static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); __ctxt_write_spsr(&vcpu_ctxt(vcpu), val); } static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); __ctxt_write_spsr_abt(&vcpu_ctxt(vcpu), val); } static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); __ctxt_write_spsr_und(&vcpu_ctxt(vcpu), val); } diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 44e76993a9b4..433601f79b94 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -37,6 +37,7 @@ extern struct exception_table_entry __stop___kvm_ex_table; static inline bool update_fp_enabled(struct kvm_vcpu *vcpu) { struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); + /* * When the system doesn't support FP/SIMD, we cannot rely on * the _TIF_FOREIGN_FPSTATE flag. 
However, we always inject an @@ -55,8 +56,8 @@ static inline bool update_fp_enabled(struct kvm_vcpu *vcpu) /* Save the 32-bit only FPSIMD system register state */ static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + if (!vcpu_el1_is_32bit(vcpu)) return; @@ -65,8 +66,6 @@ static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); /* * We are about to set CPTR_EL2.TFP to trap all floating point * register accesses to EL2, however, the ARM ARM clearly states that @@ -220,8 +219,8 @@ static inline void __hyp_sve_save_host(struct kvm_vcpu *vcpu) static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2); __sve_restore_state(vcpu_sve_pffr(vcpu), &ctxt_fp_regs(vcpu_ctxt)->fpsr); @@ -395,7 +394,6 @@ DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt); static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu) { struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *ctxt; u64 val; @@ -428,6 +426,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgi { struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) hyp_state_fault(vcpu_hyps).esr_el2 = read_sysreg_el2(SYS_ESR); diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h index df9cd2177e71..b750ff40a604 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -160,6 +160,7 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu) { struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + if (!vcpu_el1_is_32bit(vcpu)) return; @@ -179,6 +180,7 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu) { struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + if (!vcpu_el1_is_32bit(vcpu)) return; diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index d9a69e66158c..b90ec8db5864 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -37,7 +37,6 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector); static void __activate_traps(struct kvm_vcpu *vcpu) { struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val; ___activate_traps(vcpu_hyps); diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 5039910a7c80..7f926016cebe 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -34,7 +34,6 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector); static void __activate_traps(struct kvm_vcpu *vcpu) { struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); u64 val; ___activate_traps(vcpu_hyps); @@ -168,8 +167,6 @@ NOKPROBE_SYMBOL(__kvm_vcpu_run_vhe); int __kvm_vcpu_run(struct kvm_vcpu *vcpu) { - struct vcpu_hyp_state 
*vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); int ret; local_daif_mask(); diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c index 1571c144e9b0..1ded8be83c5a 100644 --- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c @@ -64,7 +64,6 @@ NOKPROBE_SYMBOL(sysreg_restore_guest_state_vhe); void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu) { struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt; struct kvm_cpu_context *host_ctxt; @@ -99,8 +98,6 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu) */ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt; struct kvm_cpu_context *host_ctxt; From patchwork Fri Sep 24 12:53:53 2021 Date: Fri, 24 Sep 2021 13:53:53 +0100 In-Reply-To: <20210924125359.2587041-1-tabba@google.com> Message-Id: <20210924125359.2587041-25-tabba@google.com> References: <20210924125359.2587041-1-tabba@google.com> X-Mailer: git-send-email 2.33.0.685.g46640cef36-goog Subject: [RFC PATCH v1 24/30] KVM: arm64: remove unused functions From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org __vcpu_write_spsr*() functions are not used anymore. Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/exception.c | 15 --------------- 1 file changed, 15 deletions(-) diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c index bb0bc1f5568c..fdfc809f61b8 100644 --- a/arch/arm64/kvm/hyp/exception.c +++ b/arch/arm64/kvm/hyp/exception.c @@ -67,21 +67,6 @@ static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg) __ctxt_write_sys_reg(&vcpu_ctxt(vcpu), val, reg); } -static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val) -{ - __ctxt_write_spsr(&vcpu_ctxt(vcpu), val); -} - -static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val) -{ - __ctxt_write_spsr_abt(&vcpu_ctxt(vcpu), val); -} - -static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val) -{ - __ctxt_write_spsr_und(&vcpu_ctxt(vcpu), val); -} - /* * This performs the exception entry at a given EL (@target_mode), stashing PC * and PSTATE into ELR and SPSR respectively, and compute the new PC/PSTATE.
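Patches 22-24 follow a single pattern: a helper that used to take the whole struct kvm_vcpu is narrowed to the piece of state it actually touches, callers are converted to pass that piece directly, and the now-dead vcpu-taking wrapper is deleted. A minimal, self-contained C sketch of the pattern (the structs and values below are simplified stand-ins, not the kernel's real definitions):

#include <inttypes.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's kvm_cpu_context / kvm_vcpu. */
struct kvm_cpu_context {
	uint64_t spsr_el1;
};

struct kvm_vcpu {
	struct kvm_cpu_context ctxt;
	/* hypervisor-private state would live in a vcpu_hyp_state */
};

/* Narrow helper: needs only the context it writes. */
static void ctxt_write_spsr(struct kvm_cpu_context *ctxt, uint64_t val)
{
	ctxt->spsr_el1 = val;
}

/*
 * The old wide wrapper only forwarded to the narrow helper:
 *
 *	static void vcpu_write_spsr(struct kvm_vcpu *vcpu, uint64_t val)
 *	{
 *		ctxt_write_spsr(&vcpu->ctxt, val);
 *	}
 *
 * Once every caller passes &vcpu->ctxt directly, the wrapper is dead
 * code and can be deleted, as patch 24 does for __vcpu_write_spsr*().
 */

int main(void)
{
	struct kvm_vcpu vcpu = { .ctxt = { .spsr_el1 = 0 } };

	ctxt_write_spsr(&vcpu.ctxt, 0x3c5);
	printf("spsr_el1 = 0x%" PRIx64 "\n", vcpu.ctxt.spsr_el1);
	return 0;
}

Keeping each helper's dependencies explicit in its signature is what the later pVM patches rely on when they restrict which state hypervisor code may touch.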
From patchwork Fri Sep 24 12:53:54 2021 Date: Fri, 24 Sep 2021 13:53:54 +0100 In-Reply-To: <20210924125359.2587041-1-tabba@google.com> Message-Id: <20210924125359.2587041-26-tabba@google.com> References: <20210924125359.2587041-1-tabba@google.com> X-Mailer: git-send-email 2.33.0.685.g46640cef36-goog Subject: [RFC PATCH v1 25/30] KVM: arm64:
separate kvm_run() for protected VMs From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Split kvm_run() for protected and non-protected VMs. Protected VMs support fewer features, so separating them out will ease the refactoring and simplify the code. This patch starts by only replicating the code from the non-protected case, to make it easier to diff against future patches. Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/nvhe/switch.c | 119 ++++++++++++++++++++++++++++++- 1 file changed, 116 insertions(+), 3 deletions(-) diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index b90ec8db5864..9e79f97ba49e 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -119,7 +119,7 @@ static void __hyp_vgic_save_state(struct kvm_vcpu *vcpu) } } -/* Restore VGICv3 state on non_VEH systems */ +/* Restore VGICv3 state on nVHE systems */ static void __hyp_vgic_restore_state(struct kvm_vcpu *vcpu) { if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { @@ -166,8 +166,110 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) write_sysreg(pmu->events_host, pmcntenset_el0); } -/* Switch to the guest for legacy non-VHE systems */ -int __kvm_vcpu_run(struct kvm_vcpu *vcpu) +/* Switch to the non-protected guest */ +static int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu) +{ + struct vcpu_hyp_state *vcpu_hyps = &vcpu->arch.hyp_state; + struct kvm_cpu_context *vcpu_ctxt = &vcpu->arch.ctxt; + struct kvm *kvm = kern_hyp_va(vcpu->kvm); + struct vgic_dist *vgic = &kvm->arch.vgic; + struct kvm_cpu_context *host_ctxt; + struct kvm_cpu_context *guest_ctxt; + bool pmu_switch_needed; + u64 exit_code; + + /* + * Having IRQs masked via PMR when entering the guest means the GIC + * will not signal the CPU of interrupts of lower priority, and the + * only way to get out will be via guest exceptions. + * Naturally, we want to avoid this. + */ + if (system_uses_irq_prio_masking()) { + gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET); + pmr_sync(); + } + + host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; + set_hyp_running_vcpu(host_ctxt, vcpu); + guest_ctxt = &vcpu->arch.ctxt; + + pmu_switch_needed = __pmu_switch_to_guest(host_ctxt); + + __sysreg_save_state_nvhe(host_ctxt); + /* + * We must flush and disable the SPE buffer for nVHE, as + * the translation regime(EL1&0) is going to be loaded with + * that of the guest. And we must do this before we change the + * translation regime to EL2 (via MDCR_EL2_E2PB == 0) and + * before we load guest Stage1. + */ + __debug_save_host_buffers_nvhe(vcpu); + + kvm_adjust_pc(vcpu_ctxt, vcpu_hyps); + + /* + * We must restore the 32-bit state before the sysregs, thanks + * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72). + * + * Also, and in order to be able to deal with erratum #1319537 (A57) + * and #1319367 (A72), we must ensure that all VM-related sysreg are + * restored before we enable S2 translation.
+ */ + __sysreg32_restore_state(vcpu); + __sysreg_restore_state_nvhe(guest_ctxt); + + __load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu)); + __activate_traps(vcpu); + + __hyp_vgic_restore_state(vcpu); + __timer_enable_traps(); + + __debug_switch_to_guest(vcpu); + + do { + struct kvm_cpu_context *hyp_ctxt = this_cpu_ptr(&kvm_hyp_ctxt); + set_hyp_running_vcpu(hyp_ctxt, vcpu); + + /* Jump in the fire! */ + exit_code = __guest_enter(guest_ctxt); + + /* And we're baaack! */ + } while (fixup_guest_exit(vcpu, vgic, &exit_code)); + + __sysreg_save_state_nvhe(guest_ctxt); + __sysreg32_save_state(vcpu); + __timer_disable_traps(); + __hyp_vgic_save_state(vcpu); + + __deactivate_traps(vcpu_hyps); + __load_host_stage2(); + + __sysreg_restore_state_nvhe(host_ctxt); + + if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED) + __fpsimd_save_fpexc32(vcpu); + + __debug_switch_to_host(vcpu); + /* + * This must come after restoring the host sysregs, since a non-VHE + * system may enable SPE here and make use of the TTBRs. + */ + __debug_restore_host_buffers_nvhe(vcpu); + + if (pmu_switch_needed) + __pmu_switch_to_host(host_ctxt); + + /* Returning to host will clear PSR.I, remask PMR if needed */ + if (system_uses_irq_prio_masking()) + gic_write_pmr(GIC_PRIO_IRQOFF); + + set_hyp_running_vcpu(host_ctxt, NULL); + + return exit_code; +} + +/* Switch to the protected guest */ +static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu) { struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); @@ -268,6 +370,17 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) return exit_code; } +/* Switch to the guest for non-VHE and protected KVM systems */ +int __kvm_vcpu_run(struct kvm_vcpu *vcpu) +{ + vcpu = kern_hyp_va(vcpu); + + if (likely(!kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)))) + return __kvm_vcpu_run_nvhe(vcpu); + else + return __kvm_vcpu_run_pvm(vcpu); +} + void __noreturn hyp_panic(void) { u64 spsr = read_sysreg_el2(SYS_SPSR); From patchwork Fri Sep 24 12:53:55 2021 Date: Fri, 24 Sep 2021 13:53:55 +0100 In-Reply-To: <20210924125359.2587041-1-tabba@google.com> Message-Id: <20210924125359.2587041-27-tabba@google.com> References: <20210924125359.2587041-1-tabba@google.com> X-Mailer: git-send-email 2.33.0.685.g46640cef36-goog Subject: [RFC PATCH v1 26/30] KVM: arm64: pVM activate_traps to use vcpu_ctxt and vcpu_hyp_state From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Refactor the protected-VM activate_traps path not to use the vcpu. Protected 32-bit VMs are not supported, so the code that sets the 32-bit floating-point traps isn't needed in the pVM case.
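The resulting structure can be sketched in miniature (user-space C; the CPTR_EL2_* values below are made-up stand-ins for the real encodings): the pVM path builds a fixed trap configuration, and only the non-protected nVHE path layers per-vcpu FP handling on top.

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

/* Made-up bit values; the real CPTR_EL2 encodings differ. */
#define CPTR_EL2_DEFAULT	(UINT32_C(1) << 0)
#define CPTR_EL2_TTA		(UINT32_C(1) << 1)
#define CPTR_EL2_TAM		(UINT32_C(1) << 2)
#define CPTR_EL2_TFP		(UINT32_C(1) << 3)
#define CPTR_EL2_TZ		(UINT32_C(1) << 4)

/* pVM path: a fixed trap configuration, no vcpu needed. */
static uint32_t pvm_traps(void)
{
	return CPTR_EL2_DEFAULT | CPTR_EL2_TTA | CPTR_EL2_TAM;
}

/*
 * nVHE path: starts from the pVM configuration, then layers on the
 * per-vcpu FP/SIMD trapping that only non-protected guests need.
 */
static uint32_t nvhe_traps(bool fp_enabled)
{
	uint32_t val = pvm_traps();

	if (!fp_enabled)
		val |= CPTR_EL2_TFP | CPTR_EL2_TZ;
	return val;
}

int main(void)
{
	printf("pvm traps:  0x%" PRIx32 "\n", pvm_traps());
	printf("nvhe traps: 0x%" PRIx32 "\n", nvhe_traps(false));
	return 0;
}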
Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/nvhe/switch.c | 35 +++++++++++++++++++++----------- 1 file changed, 23 insertions(+), 12 deletions(-) diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 9e79f97ba49e..0d654b324612 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -34,9 +34,10 @@ DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data); DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt); DEFINE_PER_CPU(unsigned long, kvm_hyp_vector); -static void __activate_traps(struct kvm_vcpu *vcpu) +/* Activate traps for protected guests */ +static void __activate_traps_pvm(struct kvm_cpu_context *vcpu_ctxt, + struct vcpu_hyp_state *vcpu_hyps) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); u64 val; ___activate_traps(vcpu_hyps); @@ -44,26 +45,36 @@ static void __activate_traps(struct kvm_vcpu *vcpu) val = CPTR_EL2_DEFAULT; val |= CPTR_EL2_TTA | CPTR_EL2_TAM; - if (!update_fp_enabled(vcpu)) { - val |= CPTR_EL2_TFP | CPTR_EL2_TZ; - __activate_traps_fpsimd32(vcpu); - } write_sysreg(val, cptr_el2); write_sysreg(__this_cpu_read(kvm_hyp_vector), vbar_el2); if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { - struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt; - isb(); /* * At this stage, and thanks to the above isb(), S2 is * configured and enabled. We can now restore the guest's S1 * configuration: SCTLR, and only then TCR. */ - write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL1), SYS_SCTLR); + write_sysreg_el1(ctxt_sys_reg(vcpu_ctxt, SCTLR_EL1), SYS_SCTLR); isb(); - write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL1), SYS_TCR); + write_sysreg_el1(ctxt_sys_reg(vcpu_ctxt, TCR_EL1), SYS_TCR); + } +} + +/* Activate traps for non-protected guests in nVHE */ +static void __activate_traps_nvhe(struct kvm_vcpu *vcpu) +{ + struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); + struct kvm_cpu_context *vcpu_ctxt = &vcpu->arch.ctxt; + + __activate_traps_pvm(vcpu_ctxt, vcpu_hyps); + + if (!update_fp_enabled(vcpu)) { + u64 val = CPTR_EL2_DEFAULT | CPTR_EL2_TTA | CPTR_EL2_TAM | + CPTR_EL2_TFP | CPTR_EL2_TZ; + __activate_traps_fpsimd32(vcpu); + write_sysreg(val, cptr_el2); + } } @@ -219,7 +230,7 @@ static int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu) __sysreg_restore_state_nvhe(guest_ctxt); __load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu)); - __activate_traps(vcpu); + __activate_traps_nvhe(vcpu); __hyp_vgic_restore_state(vcpu); __timer_enable_traps(); @@ -321,7 +332,7 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu) __sysreg_restore_state_nvhe(guest_ctxt); __load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu)); - __activate_traps(vcpu); + __activate_traps_pvm(vcpu_ctxt, vcpu_hyps); __hyp_vgic_restore_state(vcpu); __timer_enable_traps(); From patchwork Fri Sep 24 12:53:56 2021 Date: Fri, 24 Sep 2021 13:53:56 +0100 In-Reply-To: <20210924125359.2587041-1-tabba@google.com> Message-Id: <20210924125359.2587041-28-tabba@google.com> References: <20210924125359.2587041-1-tabba@google.com> X-Mailer: git-send-email 2.33.0.685.g46640cef36-goog Subject: [RFC PATCH v1 27/30] KVM: arm64: remove unsupported pVM features From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Remove code for features unsupported by protected VMs from __kvm_vcpu_run_pvm(). Do not run unsupported code (SVE) in __hyp_handle_fpsimd(). Enforcement of this is in the fixed features patch series [1].
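The SVE gate added to __hyp_handle_fpsimd() reduces to the following shape (a simplified user-space sketch: the real check additionally requires is_nvhe_hyp_code() and converts vcpu->kvm with kern_hyp_va(), and kvm_vm_is_protected() is modeled here as a field even though the kernel's version is still a placeholder returning false):

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for the kernel's struct kvm. */
struct kvm {
	bool is_protected;
};

static bool kvm_vm_is_protected(const struct kvm *kvm)
{
	return kvm->is_protected;
}

/* Protected guests never get SVE, even on SVE-capable hardware. */
static bool guest_may_use_sve(const struct kvm *kvm,
			      bool cpu_has_sve, bool vcpu_has_sve)
{
	if (kvm_vm_is_protected(kvm))
		return false;

	return cpu_has_sve && vcpu_has_sve;
}

int main(void)
{
	struct kvm pvm = { .is_protected = true };
	struct kvm vm = { .is_protected = false };

	printf("pvm sve: %d\n", guest_may_use_sve(&pvm, true, true));
	printf("vm sve:  %d\n", guest_may_use_sve(&vm, true, true));
	return 0;
}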
The code removed or disabled is related to the following: - PMU - Debug - Arm32 - SPE - SVE [1] Link: https://lore.kernel.org/kvmarm/20210922124704.600087-1-tabba@google.com/T/#u Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/include/hyp/switch.h | 5 ++-- arch/arm64/kvm/hyp/nvhe/switch.c | 36 ------------------------- 2 files changed, 3 insertions(+), 38 deletions(-) diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 433601f79b94..3ef429cfd9af 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -232,6 +232,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) { struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); + const bool is_protected = is_nvhe_hyp_code() && kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)); bool sve_guest, sve_host; u8 esr_ec; u64 reg; @@ -239,7 +240,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) if (!system_supports_fpsimd()) return false; - if (system_supports_sve()) { + if (system_supports_sve() && !is_protected) { sve_guest = hyp_state_has_sve(vcpu_hyps); sve_host = hyp_state_flags(vcpu_hyps) & KVM_ARM64_HOST_SVE_IN_USE; } else { @@ -247,7 +248,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) sve_host = false; } - esr_ec = kvm_vcpu_trap_get_class(vcpu); + esr_ec = kvm_hyp_state_trap_get_class(vcpu_hyps); if (esr_ec != ESR_ELx_EC_FP_ASIMD && esr_ec != ESR_ELx_EC_SVE) return false; diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 0d654b324612..aa0dc4f0433b 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -288,7 +288,6 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu) struct vgic_dist *vgic = &kvm->arch.vgic; struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; - bool pmu_switch_needed; u64 exit_code; /* @@ -306,29 +305,10 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu) set_hyp_running_vcpu(host_ctxt, vcpu); guest_ctxt = &vcpu->arch.ctxt; - pmu_switch_needed = __pmu_switch_to_guest(host_ctxt); - __sysreg_save_state_nvhe(host_ctxt); - /* - * We must flush and disable the SPE buffer for nVHE, as - * the translation regime(EL1&0) is going to be loaded with - * that of the guest. And we must do this before we change the - * translation regime to EL2 (via MDCR_EL2_E2PB == 0) and - * before we load guest Stage1. - */ - __debug_save_host_buffers_nvhe(vcpu); kvm_adjust_pc(vcpu_ctxt, vcpu_hyps); - /* - * We must restore the 32-bit state before the sysregs, thanks - * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72). - * - * Also, and in order to be able to deal with erratum #1319537 (A57) - * and #1319367 (A72), we must ensure that all VM-related sysreg are - * restored before we enable S2 translation. 
- */ - __sysreg32_restore_state(vcpu); __sysreg_restore_state_nvhe(guest_ctxt); __load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu)); @@ -337,8 +317,6 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu) __hyp_vgic_restore_state(vcpu); __timer_enable_traps(); - __debug_switch_to_guest(vcpu); - do { struct kvm_cpu_context *hyp_ctxt = this_cpu_ptr(&kvm_hyp_ctxt); set_hyp_running_vcpu(hyp_ctxt, vcpu); @@ -350,7 +328,6 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu) } while (fixup_guest_exit(vcpu, vgic, &exit_code)); __sysreg_save_state_nvhe(guest_ctxt); - __sysreg32_save_state(vcpu); __timer_disable_traps(); __hyp_vgic_save_state(vcpu); @@ -359,19 +336,6 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu) __sysreg_restore_state_nvhe(host_ctxt); - if (hyp_state_flags(vcpu_hyps) & KVM_ARM64_FP_ENABLED) - __fpsimd_save_fpexc32(vcpu); - - __debug_switch_to_host(vcpu); - /* - * This must come after restoring the host sysregs, since a non-VHE - * system may enable SPE here and make use of the TTBRs. - */ - __debug_restore_host_buffers_nvhe(vcpu); - - if (pmu_switch_needed) - __pmu_switch_to_host(host_ctxt); - /* Returning to host will clear PSR.I, remask PMR if needed */ if (system_uses_irq_prio_masking()) gic_write_pmr(GIC_PRIO_IRQOFF); From patchwork Fri Sep 24 12:53:57 2021 Date: Fri, 24 Sep 2021 13:53:57 +0100 In-Reply-To: <20210924125359.2587041-1-tabba@google.com> Message-Id: <20210924125359.2587041-29-tabba@google.com> References: <20210924125359.2587041-1-tabba@google.com> X-Mailer: git-send-email 2.33.0.685.g46640cef36-goog Subject: [RFC PATCH v1 28/30] KVM: arm64: reduce scope of pVM fixup_guest_exit to hyp_state and kvm_cpu_ctxt From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Reduce the scope of fixup_guest_exit for protected VMs so that it only needs hyp_state and kvm_cpu_ctxt. Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/include/hyp/switch.h | 23 +++++++++++++++++++---- arch/arm64/kvm/hyp/nvhe/switch.c | 7 ++----- arch/arm64/kvm/hyp/vhe/switch.c | 3 +-- 3 files changed, 22 insertions(+), 11 deletions(-) diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 3ef429cfd9af..ea9571f712c6 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -423,11 +423,8 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu) * the guest, false when we should restore the host state and return to the * main run loop.
*/ -static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgic, u64 *exit_code) +static inline bool _fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgic, struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps, u64 *exit_code) { - struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); - struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) hyp_state_fault(vcpu_hyps).esr_el2 = read_sysreg_el2(SYS_ESR); @@ -518,6 +515,24 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgi return true; } +static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) +{ + struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt; + struct vcpu_hyp_state *hyps = &vcpu->arch.hyp_state; + // TODO: create helper for getting VA + struct kvm *kvm = vcpu->kvm; + + if (is_nvhe_hyp_code()) + kvm = kern_hyp_va(kvm); + + return _fixup_guest_exit(vcpu, &kvm->arch.vgic, ctxt, hyps, exit_code); +} + +static inline bool fixup_pvm_guest_exit(struct kvm_vcpu *vcpu, struct vgic_dist *vgic, struct kvm_cpu_context *ctxt, struct vcpu_hyp_state *hyps, u64 *exit_code) +{ + return _fixup_guest_exit(vcpu, vgic, ctxt, hyps, exit_code); +} + static inline void __kvm_unexpected_el2_exception(void) { extern char __guest_exit_panic[]; diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index aa0dc4f0433b..1920aebbe49a 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -182,8 +182,6 @@ static int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu) { struct vcpu_hyp_state *vcpu_hyps = &vcpu->arch.hyp_state; struct kvm_cpu_context *vcpu_ctxt = &vcpu->arch.ctxt; - struct kvm *kvm = kern_hyp_va(vcpu->kvm); - struct vgic_dist *vgic = &kvm->arch.vgic; struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; bool pmu_switch_needed; @@ -245,7 +243,7 @@ static int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu) exit_code = __guest_enter(guest_ctxt); /* And we're baaack! */ - } while (fixup_guest_exit(vcpu, vgic, &exit_code)); + } while (fixup_guest_exit(vcpu, &exit_code)); __sysreg_save_state_nvhe(guest_ctxt); __sysreg32_save_state(vcpu); @@ -285,7 +283,6 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu) struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); struct kvm *kvm = kern_hyp_va(vcpu->kvm); - struct vgic_dist *vgic = &kvm->arch.vgic; struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; u64 exit_code; @@ -325,7 +322,7 @@ static int __kvm_vcpu_run_pvm(struct kvm_vcpu *vcpu) exit_code = __guest_enter(guest_ctxt); /* And we're baaack! 
*/ - } while (fixup_guest_exit(vcpu, vgic, &exit_code)); + } while (fixup_pvm_guest_exit(vcpu, &kvm->arch.vgic, vcpu_ctxt, vcpu_hyps, &exit_code)); __sysreg_save_state_nvhe(guest_ctxt); __timer_disable_traps(); diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 7f926016cebe..4a05aff37325 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -110,7 +110,6 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) { struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu); struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); - struct vgic_dist *vgic = &vcpu->kvm->arch.vgic; struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; u64 exit_code; @@ -148,7 +147,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) exit_code = __guest_enter(guest_ctxt); /* And we're baaack! */ - } while (fixup_guest_exit(vcpu, vgic, &exit_code)); + } while (fixup_guest_exit(vcpu, &exit_code)); sysreg_save_guest_state_vhe(guest_ctxt); From patchwork Fri Sep 24 12:53:58 2021 Date: Fri, 24 Sep 2021 13:53:58 +0100 In-Reply-To: <20210924125359.2587041-1-tabba@google.com> Message-Id: <20210924125359.2587041-30-tabba@google.com> References: <20210924125359.2587041-1-tabba@google.com> X-Mailer: git-send-email 2.33.0.685.g46640cef36-goog Subject: [RFC PATCH v1 29/30] [DONOTMERGE] Remove Coccinelle scripts added for refactoring From: Fuad Tabba To: kvmarm@lists.cs.columbia.edu Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The scripts are not needed anymore; they were included only for the git history. Signed-off-by: Fuad Tabba --- cocci_refactor/add_ctxt.cocci | 169 ------------------------ cocci_refactor/add_hypstate.cocci | 125 ------------------ cocci_refactor/hyp_ctxt.cocci | 38 ------ cocci_refactor/range.cocci | 50 ------- cocci_refactor/remove_unused.cocci | 69 ---------- cocci_refactor/test.cocci | 20 --- cocci_refactor/use_ctxt.cocci | 32 ----- cocci_refactor/use_ctxt_access.cocci | 39 ------ cocci_refactor/use_hypstate.cocci | 63 --------- cocci_refactor/vcpu_arch_ctxt.cocci | 13 -- cocci_refactor/vcpu_declr.cocci | 59 --------- cocci_refactor/vcpu_flags.cocci | 10 -- cocci_refactor/vcpu_hyp_accessors.cocci | 35 ----- cocci_refactor/vcpu_hyp_state.cocci | 30 ----- cocci_refactor/vgic3_cpu.cocci | 118 ----------------- 15 files changed, 870 deletions(-) delete mode 100644 cocci_refactor/add_ctxt.cocci delete mode 100644 cocci_refactor/add_hypstate.cocci delete mode 100644 cocci_refactor/hyp_ctxt.cocci delete mode 100644 cocci_refactor/range.cocci delete mode 100644 cocci_refactor/remove_unused.cocci delete mode 100644 cocci_refactor/test.cocci delete mode 100644 cocci_refactor/use_ctxt.cocci delete mode 100644 cocci_refactor/use_ctxt_access.cocci delete mode 100644 cocci_refactor/use_hypstate.cocci delete mode 100644 cocci_refactor/vcpu_arch_ctxt.cocci delete mode 100644 cocci_refactor/vcpu_declr.cocci delete mode 100644 cocci_refactor/vcpu_flags.cocci delete mode 100644 cocci_refactor/vcpu_hyp_accessors.cocci delete mode 100644 cocci_refactor/vcpu_hyp_state.cocci delete mode 100644 cocci_refactor/vgic3_cpu.cocci diff --git a/cocci_refactor/add_ctxt.cocci b/cocci_refactor/add_ctxt.cocci deleted file mode 100644 index 203644944ace..000000000000 --- a/cocci_refactor/add_ctxt.cocci +++ /dev/null @@ -1,169 +0,0 @@ -// -/* -spatch --sp-file add_ctxt.cocci --dir arch/arm64/kvm/hyp --ignore arch/arm64/kvm/hyp/nvhe/debug-sr.c --ignore arch/arm64/kvm/hyp/vhe/debug-sr.c
--include-headers --in-place -*/ - - -@exists@ -identifier vcpu; -fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; -identifier fc; -@@ -<... -( - struct kvm_vcpu *vcpu = NULL; -+ struct kvm_cpu_context *vcpu_ctxt; -| - struct kvm_vcpu *vcpu = ...; -+ struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); -| - struct kvm_vcpu *vcpu; -+ struct kvm_cpu_context *vcpu_ctxt; -) -<... - vcpu = ...; -+ vcpu_ctxt = &vcpu_ctxt(vcpu); -...> -fc(..., vcpu, ...) -...> - -@exists@ -identifier func != {kvm_arch_vcpu_run_pid_change}; -identifier fc != {vcpu_ctxt}; -identifier vcpu; -fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; -@@ -func(..., struct kvm_vcpu *vcpu, ...) { -+ struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); -<+... -fc(..., vcpu, ...) -...+> - } - -@@ -expression a, b; -identifier vcpu; -fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; -iterator name kvm_for_each_vcpu; -identifier fc; -@@ -kvm_for_each_vcpu(a, vcpu, b) - { -+ vcpu_ctxt = &vcpu_ctxt(vcpu); -<+... -fc(..., vcpu, ...) -...+> - } - -@@ -identifier vcpu_ctxt, vcpu; -iterator name kvm_for_each_vcpu; -type T; -identifier x; -statement S1, S2; -@@ -kvm_for_each_vcpu(...) - { -- vcpu_ctxt = &vcpu_ctxt(vcpu); -... when != S1 -+ vcpu_ctxt = &vcpu_ctxt(vcpu); - S2 - ... when any - } - -@ -disable optional_qualifier -exists -@ -identifier vcpu; -identifier vcpu_ctxt; -@@ -<... - const struct kvm_vcpu *vcpu = ...; -- struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); -+ const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); -...> - -@disable optional_qualifier@ -identifier func, vcpu; -identifier vcpu_ctxt; -@@ -func(..., const struct kvm_vcpu *vcpu, ...) { -- struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); -+ const struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu); -... - } - -@exists@ -expression r1, r2; -identifier vcpu; -fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; -@@ -( -- vcpu_gp_regs(vcpu) -+ ctxt_gp_regs(vcpu_ctxt) -| -- vcpu_spsr_abt(vcpu) -+ ctxt_spsr_abt(vcpu_ctxt) -| -- vcpu_spsr_und(vcpu) -+ ctxt_spsr_und(vcpu_ctxt) -| -- vcpu_spsr_irq(vcpu) -+ ctxt_spsr_irq(vcpu_ctxt) -| -- vcpu_spsr_fiq(vcpu) -+ ctxt_spsr_fiq(vcpu_ctxt) -| -- vcpu_fp_regs(vcpu) -+ ctxt_fp_regs(vcpu_ctxt) -| -- __vcpu_sys_reg(vcpu, r1) -+ ctxt_sys_reg(vcpu_ctxt, r1) -| -- __vcpu_read_sys_reg(vcpu, r1) -+ __ctxt_read_sys_reg(vcpu_ctxt, r1) -| -- __vcpu_write_sys_reg(vcpu, r1, r2) -+ __ctxt_write_sys_reg(vcpu_ctxt, r1, r2) -| -- __vcpu_write_spsr(vcpu, r1) -+ __ctxt_write_spsr(vcpu_ctxt, r1) -| -- __vcpu_write_spsr_abt(vcpu, r1) -+ __ctxt_write_spsr_abt(vcpu_ctxt, r1) -| -- __vcpu_write_spsr_und(vcpu, r1) -+ __ctxt_write_spsr_und(vcpu_ctxt, r1) -| -- vcpu_pc(vcpu) -+ ctxt_pc(vcpu_ctxt) -| -- vcpu_cpsr(vcpu) -+ ctxt_cpsr(vcpu_ctxt) -| -- vcpu_mode_is_32bit(vcpu) -+ ctxt_mode_is_32bit(vcpu_ctxt) -| -- vcpu_set_thumb(vcpu) -+ ctxt_set_thumb(vcpu_ctxt) -| -- vcpu_get_reg(vcpu, r1) -+ ctxt_get_reg(vcpu_ctxt, r1) -| -- vcpu_set_reg(vcpu, r1, r2) -+ ctxt_set_reg(vcpu_ctxt, r1, r2) -) - - -/* Handles one case of a call within a call. */ -@@ -expression r1, r2; -identifier vcpu; -fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; -@@ -- vcpu_pc(vcpu) -+ ctxt_pc(vcpu_ctxt) - -// diff --git a/cocci_refactor/add_hypstate.cocci b/cocci_refactor/add_hypstate.cocci deleted file mode 100644 index e8635d0e8f57..000000000000 --- a/cocci_refactor/add_hypstate.cocci +++ /dev/null @@ -1,125 +0,0 @@ -// - -/* -FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]" ! 
-name "debug-sr*") arch/arm64/include/asm/kvm_hyp.h" -spatch --sp-file add_hypstate.cocci $FILES --in-place -*/ - -@exists@ -identifier vcpu; -fresh identifier hyps = vcpu ## "_hyps"; -identifier fc; -@@ -<... -( - struct kvm_vcpu *vcpu = NULL; -+ struct vcpu_hyp_state *hyps; -| - struct kvm_vcpu *vcpu = ...; -+ struct vcpu_hyp_state *hyps = &hyp_state(vcpu); -| - struct kvm_vcpu *vcpu; -+ struct vcpu_hyp_state *hyps; -) -<... - vcpu = ...; -+ hyps = &hyp_state(vcpu); -...> -fc(..., vcpu, ...) -...> - -@exists@ -identifier func != {kvm_arch_vcpu_run_pid_change}; -identifier vcpu; -fresh identifier hyps = vcpu ## "_hyps"; -identifier fc; -@@ -func(..., struct kvm_vcpu *vcpu, ...) { -+ struct vcpu_hyp_state *hyps = &hyp_state(vcpu); -<+... -fc(..., vcpu, ...) -...+> - } - -@@ -expression a, b; -identifier vcpu; -fresh identifier hyps = vcpu ## "_hyps"; -iterator name kvm_for_each_vcpu; -identifier fc; -@@ -kvm_for_each_vcpu(a, vcpu, b) - { -+ hyps = &hyp_state(vcpu); -<+... -fc(..., vcpu, ...) -...+> - } - -@@ -identifier hyps, vcpu; -iterator name kvm_for_each_vcpu; -statement S1, S2; -@@ -kvm_for_each_vcpu(...) - { -- hyps = &hyp_state(vcpu); -... when != S1 -+ hyps = &hyp_state(vcpu); - S2 - ... when any - } - -@ -disable optional_qualifier -exists -@ -identifier vcpu, hyps; -@@ -<... - const struct kvm_vcpu *vcpu = ...; -- struct vcpu_hyp_state *hyps = &hyp_state(vcpu); -+ const struct vcpu_hyp_state *hyps = &hyp_state(vcpu); -...> - - -@@ -identifier func, vcpu, hyps; -@@ -func(..., const struct kvm_vcpu *vcpu, ...) { -- struct vcpu_hyp_state *hyps = &hyp_state(vcpu); -+ const struct vcpu_hyp_state *hyps = &hyp_state(vcpu); -... - } - -@exists@ -identifier vcpu; -fresh identifier hyps = vcpu ## "_hyps"; -@@ -( -- vcpu_hcr_el2(vcpu) -+ hyp_state_hcr_el2(hyps) -| -- vcpu_mdcr_el2(vcpu) -+ hyp_state_mdcr_el2(hyps) -| -- vcpu_vsesr_el2(vcpu) -+ hyp_state_vsesr_el2(hyps) -| -- vcpu_fault(vcpu) -+ hyp_state_fault(hyps) -| -- vcpu_flags(vcpu) -+ hyp_state_flags(hyps) -| -- vcpu_has_sve(vcpu) -+ hyp_state_has_sve(hyps) -| -- vcpu_has_ptrauth(vcpu) -+ hyp_state_has_ptrauth(hyps) -| -- kvm_arm_vcpu_sve_finalized(vcpu) -+ kvm_arm_hyp_state_sve_finalized(hyps) -) - -// diff --git a/cocci_refactor/hyp_ctxt.cocci b/cocci_refactor/hyp_ctxt.cocci deleted file mode 100644 index af7974e3a502..000000000000 --- a/cocci_refactor/hyp_ctxt.cocci +++ /dev/null @@ -1,38 +0,0 @@ -// Remove vcpu if all we're using is hypstate and ctxt - -/* -FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]")" -spatch --sp-file hyp_ctxt.cocci $FILES --in-place; -*/ - -// - -@remove@ -identifier func !~ "^trap_|^access_|dbg_to_reg|check_pmu_access_disabled|match_mpidr|get_ctr_el0|emulate_cp|unhandled_cp_access|index_to_sys_reg_desc|kvm_pmu_|pmu_counter_idx_valid|reset_|read_from_write_only|write_to_read_only|undef_access|vgic_|kvm_handle_|handle_sve|handle_smc|handle_no_fpsimd|id_visibility|reg_to_dbg|ptrauth_visibility|sve_visibility|kvm_arch_sched_in|kvm_arch_vcpu_|kvm_vcpu_pmu_|kvm_psci_|kvm_arm_copy_fw_reg_indices|kvm_arm_pvtime_|kvm_trng_|kvm_arm_timer_"; -identifier vcpu; -fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; -fresh identifier vcpu_hyps = vcpu ## "_hyps"; -identifier hyps_remove; -identifier ctxt_remove; -@@ -func(..., -- struct kvm_vcpu *vcpu -+ struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps -,...) { -?- struct vcpu_hyp_state *hyps_remove = ...; -?- struct kvm_cpu_context *ctxt_remove = ...; -... 
when != vcpu - } - -@@ -identifier vcpu; -fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; -fresh identifier vcpu_hyps = vcpu ## "_hyps"; -identifier remove.func; -@@ - func( -- vcpu -+ vcpu_ctxt, vcpu_hyps - , ...) - -// \ No newline at end of file diff --git a/cocci_refactor/range.cocci b/cocci_refactor/range.cocci deleted file mode 100644 index d99b9ee30657..000000000000 --- a/cocci_refactor/range.cocci +++ /dev/null @@ -1,50 +0,0 @@ - - -// - -/* - FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h"; spatch --sp-file range.cocci $FILES -*/ - -@initialize:python@ -@@ -starts = ("start", "begin", "from", "floor", "addr", "kaddr") -ends = ("size", "length", "len") - -//ends = ("end", "to", "ceiling", "size", "length", "len") - - -@start_end@ -identifier f; -type A, B; -identifier start, end; -parameter list[n] ps; -@@ -f(ps, A start, B end, ...) { -... -} - -@script:python@ -start << start_end.start; -end << start_end.end; -ta << start_end.A; -tb << start_end.B; -@@ - -if ta != tb and tb != "size_t": - cocci.include_match(False) -elif not any(x in start for x in starts) and not any(x in end for x in ends): - cocci.include_match(False) - -@@ -identifier f = start_end.f; -expression list[start_end.n] xs; -expression a, b; -@@ -( -* f(xs, a, a, ...) -| -* f(xs, a, a - b, ...) -) - -// \ No newline at end of file diff --git a/cocci_refactor/remove_unused.cocci b/cocci_refactor/remove_unused.cocci deleted file mode 100644 index c06278398198..000000000000 --- a/cocci_refactor/remove_unused.cocci +++ /dev/null @@ -1,69 +0,0 @@ -// - -/* -spatch --sp-file remove_unused.cocci --dir arch/arm64/kvm/hyp --in-place --include-headers --force-diff -*/ - -@@ -identifier hyps; -@@ -{ -... -( -- struct vcpu_hyp_state *hyps = ...; -| -- struct vcpu_hyp_state *hyps; -) -... when != hyps - when != if (...) { <+...hyps...+> } -?- hyps = ...; -... when != hyps - when != if (...) { <+...hyps...+> } -} - -@@ -identifier vcpu_ctxt; -@@ -{ -... -( -- struct kvm_cpu_context *vcpu_ctxt = ...; -| -- struct kvm_cpu_context *vcpu_ctxt; -) -... when != vcpu_ctxt - when != if (...) { <+...vcpu_ctxt...+> } -?- vcpu_ctxt = ...; -... when != vcpu_ctxt - when != if (...) { <+...vcpu_ctxt...+> } -} - -@@ -identifier x; -identifier func; -statement S; -@@ -func(...) - { -... -struct kvm_cpu_context *x = ...; -+ -S -... - } - -@@ -identifier x; -identifier func; -statement S; -@@ -func(...) - { -... -struct vcpu_hyp_state *x = ...; -+ -S -... - } - -// diff --git a/cocci_refactor/test.cocci b/cocci_refactor/test.cocci deleted file mode 100644 index 5eb685240ce7..000000000000 --- a/cocci_refactor/test.cocci +++ /dev/null @@ -1,20 +0,0 @@ -/* - FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h"; spatch --sp-file test.cocci $FILES - -*/ - -@r@ -identifier fn; -@@ -fn(...) { - hello; - ... -} - -@@ -identifier r.fn; -@@ -static fn(...) { -+ world; - ... 
-} diff --git a/cocci_refactor/use_ctxt.cocci b/cocci_refactor/use_ctxt.cocci deleted file mode 100644 index f3f961f567fd..000000000000 --- a/cocci_refactor/use_ctxt.cocci +++ /dev/null @@ -1,32 +0,0 @@ -// -/* -spatch --sp-file use_ctxt.cocci --dir arch/arm64/kvm/hyp --ignore debug-sr --include-headers --in-place -spatch --sp-file use_ctxt.cocci --dir arch/arm64/kvm/hyp --ignore debug-sr --include-headers --in-place -*/ - -@remove_vcpu@ -identifier vcpu; -fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; -identifier ctxt_remove; -identifier func !~ "(reset_unknown|reset_val|kvm_pmu_valid_counter_mask|reset_pmcr|kvm_arch_vcpu_in_kernel|__vgic_v3_)"; -@@ -func( -- struct kvm_vcpu *vcpu -+ struct kvm_cpu_context *vcpu_ctxt -, ...) { -- struct kvm_cpu_context *ctxt_remove = ...; -... when != vcpu - when != if (...) { <+...vcpu...+> } -} - -@@ -identifier vcpu; -fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; -identifier func = remove_vcpu.func; -@@ -func( -- vcpu -+ vcpu_ctxt - , ...) - -// diff --git a/cocci_refactor/use_ctxt_access.cocci b/cocci_refactor/use_ctxt_access.cocci deleted file mode 100644 index 74f94141e662..000000000000 --- a/cocci_refactor/use_ctxt_access.cocci +++ /dev/null @@ -1,39 +0,0 @@ -// - -/* -spatch --sp-file use_ctxt_access.cocci --dir arch/arm64/kvm/ --include-headers --in-place -*/ - -@@ -constant r; -@@ -- __ctxt_sys_reg(&vcpu->arch.ctxt, r) -+ &__vcpu_sys_reg(vcpu, r) - -@@ -identifier r; -@@ -- vcpu->arch.ctxt.regs.r -+ vcpu_gp_regs(vcpu)->r - -@@ -identifier r; -@@ -- vcpu->arch.ctxt.fp_regs.r -+ vcpu_fp_regs(vcpu)->r - -@@ -identifier r; -fresh identifier accessor = "vcpu_" ## r; -@@ -- &vcpu->arch.ctxt.r -+ accessor(vcpu) - -@@ -identifier r; -fresh identifier accessor = "vcpu_" ## r; -@@ -- vcpu->arch.ctxt.r -+ *accessor(vcpu) - -// \ No newline at end of file diff --git a/cocci_refactor/use_hypstate.cocci b/cocci_refactor/use_hypstate.cocci deleted file mode 100644 index f685149de748..000000000000 --- a/cocci_refactor/use_hypstate.cocci +++ /dev/null @@ -1,63 +0,0 @@ -// - -/* -FILES="$(find arch/arm64/kvm/hyp -name "*.[ch]" ! -name "debug-sr*") arch/arm64/include/asm/kvm_hyp.h" -spatch --sp-file use_hypstate.cocci $FILES --in-place -*/ - - -@remove_vcpu_hyps@ -identifier vcpu; -fresh identifier hyps = vcpu ## "_hyps"; -identifier hyps_remove; -identifier func; -@@ -func( -- struct kvm_vcpu *vcpu -+ struct vcpu_hyp_state *hyps -, ...) { -- struct vcpu_hyp_state *hyps_remove = ...; -... when != vcpu - when != if (...) { <+...vcpu...+> } -} - -@@ -identifier vcpu; -fresh identifier hyps = vcpu ## "_hyps"; -identifier func = remove_vcpu_hyps.func; -@@ -func( -- vcpu -+ hyps - , ...) - -@remove_vcpu_hyps_ctxt@ -identifier vcpu; -fresh identifier hyps = vcpu ## "_hyps"; -identifier hyps_remove; -identifier ctxt_remove; -identifier func; -@@ -func( -- struct kvm_vcpu *vcpu -+ struct vcpu_hyp_state *hyps -, ...) { -- struct vcpu_hyp_state *hyps_remove = ...; -- struct kvm_cpu_context *ctxt_remove = ...; -... when != vcpu - when != if (...) { <+...vcpu...+> } - when != ctxt_remove - when != if (...) { <+...ctxt_remove...+> } -} - -@@ -identifier vcpu; -fresh identifier hyps = vcpu ## "_hyps"; -identifier func = remove_vcpu_hyps_ctxt.func; -@@ -func( -- vcpu -+ hyps - , ...) 
- -// diff --git a/cocci_refactor/vcpu_arch_ctxt.cocci b/cocci_refactor/vcpu_arch_ctxt.cocci deleted file mode 100644 index 69b3a000de4e..000000000000 --- a/cocci_refactor/vcpu_arch_ctxt.cocci +++ /dev/null @@ -1,13 +0,0 @@ -// spatch --sp-file vcpu_arch_ctxt.cocci --no-includes --include-headers --dir arch/arm64 - -// -@@ -identifier vcpu; -@@ -( -- vcpu->arch.ctxt.regs -+ vcpu_gp_regs(vcpu) -| -- vcpu->arch.ctxt.fp_regs -+ vcpu_fp_regs(vcpu) -) diff --git a/cocci_refactor/vcpu_declr.cocci b/cocci_refactor/vcpu_declr.cocci deleted file mode 100644 index 59cd46bd6b2d..000000000000 --- a/cocci_refactor/vcpu_declr.cocci +++ /dev/null @@ -1,59 +0,0 @@ - -/* -FILES="$(find arch/arm64 -name "*.[ch]") include/kvm/arm_hypercalls.h"; spatch --sp-file vcpu_declr.cocci $FILES --in-place -*/ - -// - -@@ -identifier vcpu; -expression E; -@@ -<... -- struct kvm_vcpu *vcpu; -+ struct kvm_vcpu *vcpu = E; - -- vcpu = E; -...> - - -/* -@@ -identifier vcpu; -identifier f1, f2; -@@ -f1(...) -{ -- struct kvm_vcpu *vcpu = NULL; -+ struct kvm_vcpu *vcpu; -... when != f2(..., vcpu, ...) -} -*/ - -/* -@find_after@ -identifier vcpu; -position p; -identifier f; -@@ -<... - struct kvm_vcpu *vcpu@p; - ... when != vcpu = ...; - f(..., vcpu, ...); -...> - -@@ -identifier vcpu; -expression E; -position p != find_after.p; -@@ -<... -- struct kvm_vcpu *vcpu@p; -+ struct kvm_vcpu *vcpu = E; - ... -- vcpu = E; -...> - -*/ - -// diff --git a/cocci_refactor/vcpu_flags.cocci b/cocci_refactor/vcpu_flags.cocci deleted file mode 100644 index 609bb7bd7bd0..000000000000 --- a/cocci_refactor/vcpu_flags.cocci +++ /dev/null @@ -1,10 +0,0 @@ -// spatch --sp-file el2_def_flags.cocci --no-includes --include-headers --dir arch/arm64 - -// -@@ -expression vcpu; -@@ - -- vcpu->arch.flags -+ vcpu_flags(vcpu) -// \ No newline at end of file diff --git a/cocci_refactor/vcpu_hyp_accessors.cocci b/cocci_refactor/vcpu_hyp_accessors.cocci deleted file mode 100644 index 506b56f7216f..000000000000 --- a/cocci_refactor/vcpu_hyp_accessors.cocci +++ /dev/null @@ -1,35 +0,0 @@ -// - -/* -spatch --sp-file vcpu_hyp_accessors.cocci --dir arch/arm64 --include-headers --in-place -*/ - -@find_defines@ -identifier macro; -identifier vcpu; -position p; -@@ -#define macro(vcpu) vcpu@p - -@@ -identifier vcpu; -position p != find_defines.p; -@@ -( -- vcpu@p->arch.hcr_el2 -+ vcpu_hcr_el2(vcpu) -| -- vcpu@p->arch.mdcr_el2 -+ vcpu_mdcr_el2(vcpu) -| -- vcpu@p->arch.vsesr_el2 -+ vcpu_vsesr_el2(vcpu) -| -- vcpu@p->arch.fault -+ vcpu_fault(vcpu) -| -- vcpu@p->arch.flags -+ vcpu_flags(vcpu) -) - -// diff --git a/cocci_refactor/vcpu_hyp_state.cocci b/cocci_refactor/vcpu_hyp_state.cocci deleted file mode 100644 index 3005a6f11871..000000000000 --- a/cocci_refactor/vcpu_hyp_state.cocci +++ /dev/null @@ -1,30 +0,0 @@ -// - -// spatch --sp-file vcpu_hyp_state.cocci --no-includes --include-headers --dir arch/arm64 --very-quiet --in-place - -@@ -expression vcpu; -@@ -- vcpu->arch. -+ vcpu->arch.hyp_state. 
-( - hcr_el2 -| - mdcr_el2 -| - vsesr_el2 -| - fault -| - flags -| - sysregs_loaded_on_cpu -) - -@@ -identifier arch; -@@ -- arch.fault -+ arch.hyp_state.fault - -// \ No newline at end of file diff --git a/cocci_refactor/vgic3_cpu.cocci b/cocci_refactor/vgic3_cpu.cocci deleted file mode 100644 index f7495b2e49cb..000000000000 --- a/cocci_refactor/vgic3_cpu.cocci +++ /dev/null @@ -1,118 +0,0 @@ -// - -/* -spatch --sp-file vgic3_cpu.cocci arch/arm64/kvm/hyp/vgic-v3-sr.c --in-place -*/ - - -@@ -identifier vcpu; -fresh identifier vcpu_hyps = vcpu ## "_hyps"; -@@ -( -- kvm_vcpu_sys_get_rt -+ kvm_hyp_state_sys_get_rt -| -- kvm_vcpu_get_esr -+ kvm_hyp_state_get_esr -) -- (vcpu) -+ (vcpu_hyps) - -@add_cpu_if@ -identifier func; -identifier c; -@@ -int func( -- struct kvm_vcpu *vcpu -+ struct vgic_v3_cpu_if *cpu_if - , ...) -{ -<+... -- vcpu->arch.vgic_cpu.vgic_v3.c -+ cpu_if->c -...+> -} - -@@ -identifier func = add_cpu_if.func; -@@ - func( -- vcpu -+ cpu_if - , ... - ) - - -@add_vgic_ctxt_hyps@ -identifier func; -@@ -void func( -- struct kvm_vcpu *vcpu -+ struct vgic_v3_cpu_if *cpu_if, struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps - , ...) { -?- struct vcpu_hyp_state *vcpu_hyps = ...; -?- struct kvm_cpu_context *vcpu_ctxt = ...; - ... - } - -@@ -identifier func = add_vgic_ctxt_hyps.func; -@@ - func( -- vcpu, -+ cpu_if, vcpu_ctxt, vcpu_hyps, - ... - ) - - -@find_calls@ -identifier fn; -type a, b; -@@ -- void (*fn)(struct kvm_vcpu *, a, b); -+ void (*fn)(struct vgic_v3_cpu_if *, struct kvm_cpu_context *, struct vcpu_hyp_state *, a, b); - -@@ -identifier fn = find_calls.fn; -identifier a, b; -@@ -- fn(vcpu, a, b); -+ fn(cpu_if, vcpu_ctxt, vcpu_hyps, a, b); - -@@ -@@ -int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) { -+ struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; -... -} - -@remove@ -identifier func; -identifier vcpu; -fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; -fresh identifier vcpu_hyps = vcpu ## "_hyps"; -identifier hyps_remove; -identifier ctxt_remove; -@@ -func(..., -- struct kvm_vcpu *vcpu -+ struct kvm_cpu_context *vcpu_ctxt, struct vcpu_hyp_state *vcpu_hyps -,...) { -?- struct vcpu_hyp_state *hyps_remove = ...; -?- struct kvm_cpu_context *ctxt_remove = ...; -... when != vcpu - } - -@@ -identifier vcpu; -fresh identifier vcpu_ctxt = vcpu ## "_ctxt"; -fresh identifier vcpu_hyps = vcpu ## "_hyps"; -identifier remove.func; -@@ - func( -- vcpu -+ vcpu_ctxt, vcpu_hyps - , ...) 
- -//
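For readers skimming the deleted scripts, a minimal before/after sketch of the rewrite that add_ctxt.cocci mechanises may help. The function below is illustrative, not taken from the tree, but the inserted declaration and the accessor renames (vcpu_pc -> ctxt_pc, vcpu_mode_is_32bit -> ctxt_mode_is_32bit, vcpu_set_thumb -> ctxt_set_thumb) are exactly the rules listed in the script above; vcpu_ctxt() and the ctxt_*() accessors are helpers introduced earlier in this series.

/* Before: accessors take the vCPU (illustrative function, not from the tree). */
static void skip_instr(struct kvm_vcpu *vcpu)
{
	*vcpu_pc(vcpu) += 4;
	if (vcpu_mode_is_32bit(vcpu))
		vcpu_set_thumb(vcpu);
}

/* After add_ctxt.cocci: a kvm_cpu_context pointer is declared at the top
 * of any function taking a struct kvm_vcpu *, and the accessors are
 * redirected at the context instead of the vCPU.
 */
static void skip_instr(struct kvm_vcpu *vcpu)
{
	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);

	*ctxt_pc(vcpu_ctxt) += 4;
	if (ctxt_mode_is_32bit(vcpu_ctxt))
		ctxt_set_thumb(vcpu_ctxt);
}

add_hypstate.cocci follows the same pattern for the hypervisor state (vcpu_hcr_el2 -> hyp_state_hcr_el2 and friends), and hyp_ctxt.cocci/use_hypstate.cocci then drop the vcpu parameter entirely from functions that only use the context and hyp state.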
From patchwork Fri Sep 24 12:53:59 2021
Date: Fri, 24 Sep 2021 13:53:59 +0100
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
Message-Id: <20210924125359.2587041-31-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
Subject: [RFC PATCH v1 30/30] [DONOTMERGE] Re-enable warnings
From: Fuad Tabba
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com, tabba@google.com
---
 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index 0278bd28bd97..ed669b2d705d 100644
--- a/Makefile
+++ b/Makefile
@@ -504,7 +504,7 @@ KBUILD_CFLAGS := -Wall -Wundef -Werror=strict-prototypes -Wno-trigraphs \
 	-fno-strict-aliasing -fno-common -fshort-wchar -fno-PIE \
 	-Werror=implicit-function-declaration -Werror=implicit-int \
 	-Werror=return-type -Wno-format-security \
-	-std=gnu89 -Wno-unused-variable -Wno-unused-function
+	-std=gnu89
 KBUILD_CPPFLAGS := -D__KERNEL__
 KBUILD_AFLAGS_KERNEL :=
 KBUILD_CFLAGS_KERNEL :=
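A note on why these warnings were presumably turned off earlier in the series: the Coccinelle passes removed in patch 29 insert context and hyp-state declarations mechanically before every use has been converted, and can leave helpers temporarily uncalled, so intermediate stages of the series would otherwise drown in -Wunused-variable and -Wunused-function noise; remove_unused.cocci exists precisely to mop up such leftovers. A hypothetical intermediate state of the kind the restored flags would catch (function and helper names are stand-ins, not real kernel symbols):

static void unconverted_helper(struct kvm_vcpu *vcpu);

/* Hypothetical intermediate state during the scripted refactoring:
 * add_hypstate.cocci has inserted the hyp-state pointer, but no accessor
 * in this function has been converted yet, so vcpu_hyps is unused and
 * would trigger -Wunused-variable once the flag is restored.
 */
static void example_sync(struct kvm_vcpu *vcpu)
{
	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);

	unconverted_helper(vcpu);	/* still takes the vCPU */
}

With the series complete and the cleanup passes applied, the tree should again build warning-free, so the flags can be dropped from KBUILD_CFLAGS.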