From patchwork Fri Sep 24 12:53:44 2021
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 12515311
Date: Fri, 24 Sep 2021 13:53:44 +0100
From: Fuad Tabba <tabba@google.com>
To: kvmarm@lists.cs.columbia.edu
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com,
 alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
 christoffer.dall@arm.com, drjones@redhat.com, qperret@google.com,
 kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kernel-team@android.com, tabba@google.com
Subject: [RFC PATCH v1 15/30] KVM: arm64: COCCI: vgic3_cpu.cocci: reduce
 scope of vgic v3
Message-Id: <20210924125359.2587041-16-tabba@google.com>
In-Reply-To: <20210924125359.2587041-1-tabba@google.com>
References: <20210924125359.2587041-1-tabba@google.com>
X-Mailer: git-send-email 2.33.0.685.g46640cef36-goog

The vgic v3 interface functions are passed the vcpu, when the only state
they need is the vgic v3 interface (struct vgic_v3_cpu_if), as well as the
kvm_cpu_context and the recently created vcpu_hyp_state. Reduce the scope
of these interface functions so that they take only the structs they
actually use.
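The vgic3_cpu.cocci file itself is not reproduced in this mail. Purely as
an illustration of the shape of the transformation, narrowing one of these
signatures corresponds to an SmPL rule roughly like the one below; the rule
name, metavariables and exact pattern are a sketch, not the actual contents
of cocci_refactor/vgic3_cpu.cocci:

  @ reduce_vgic3_scope @
  identifier fn =~ "^__vgic_v3_";
  identifier vcpu;
  @@
   fn(
  -	struct kvm_vcpu *vcpu
  +	struct vgic_v3_cpu_if *cpu_if,
  +	struct kvm_cpu_context *vcpu_ctxt,
  +	struct vcpu_hyp_state *vcpu_hyps
  	, ...)
   {
  -	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
  -	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
  	...
   }

A companion rule (also not shown here) has to rewrite every caller to pass
the three structs in place of the vcpu, which is why the call sites in the
diff below change in lockstep with the signatures.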
This applies the semantic patch with the following command:

spatch --sp-file cocci_refactor/vgic3_cpu.cocci arch/arm64/kvm/hyp/vgic-v3-sr.c --in-place

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/vgic-v3-sr.c | 247 ++++++++++++++++++--------------
 1 file changed, 137 insertions(+), 110 deletions(-)

diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
index d025a5830dcc..3e1951b04fce 100644
--- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
+++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
@@ -471,11 +471,10 @@ static int __vgic_v3_bpr_min(void)
 	return 8 - vtr_to_nr_pre_bits(read_gicreg(ICH_VTR_EL2));
 }
 
-static int __vgic_v3_get_group(struct kvm_vcpu *vcpu)
+static int __vgic_v3_get_group(struct kvm_cpu_context *vcpu_ctxt,
+			       struct vcpu_hyp_state *vcpu_hyps)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	u32 esr = kvm_vcpu_get_esr(vcpu);
+	u32 esr = kvm_hyp_state_get_esr(vcpu_hyps);
 	u8 crm = (esr & ESR_ELx_SYS64_ISS_CRM_MASK) >> ESR_ELx_SYS64_ISS_CRM_SHIFT;
 
 	return crm != 8;
@@ -483,10 +482,11 @@ static int __vgic_v3_get_group(struct kvm_vcpu *vcpu)
 
 #define GICv3_IDLE_PRIORITY	0xff
 
-static int __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, u32 vmcr,
+static int __vgic_v3_highest_priority_lr(struct vgic_v3_cpu_if *cpu_if,
+					 u32 vmcr,
 					 u64 *lr_val)
 {
-	unsigned int used_lrs = vcpu->arch.vgic_cpu.vgic_v3.used_lrs;
+	unsigned int used_lrs = cpu_if->used_lrs;
 	u8 priority = GICv3_IDLE_PRIORITY;
 	int i, lr = -1;
 
@@ -522,10 +522,10 @@ static int __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, u32 vmcr,
 	return lr;
 }
 
-static int __vgic_v3_find_active_lr(struct kvm_vcpu *vcpu, int intid,
+static int __vgic_v3_find_active_lr(struct vgic_v3_cpu_if *cpu_if, int intid,
 				    u64 *lr_val)
 {
-	unsigned int used_lrs = vcpu->arch.vgic_cpu.vgic_v3.used_lrs;
+	unsigned int used_lrs = cpu_if->used_lrs;
 	int i;
 
 	for (i = 0; i < used_lrs; i++) {
@@ -673,17 +673,18 @@ static int __vgic_v3_clear_highest_active_priority(void)
 	return GICv3_IDLE_PRIORITY;
 }
 
-static void __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_iar(struct vgic_v3_cpu_if *cpu_if,
+			       struct kvm_cpu_context *vcpu_ctxt,
+			       struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+			       int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 lr_val;
 	u8 lr_prio, pmr;
 	int lr, grp;
 
-	grp = __vgic_v3_get_group(vcpu);
+	grp = __vgic_v3_get_group(vcpu_ctxt, vcpu_hyps);
 
-	lr = __vgic_v3_highest_priority_lr(vcpu, vmcr, &lr_val);
+	lr = __vgic_v3_highest_priority_lr(cpu_if, vmcr, &lr_val);
 	if (lr < 0)
 		goto spurious;
 
@@ -733,10 +734,11 @@ static void __vgic_v3_bump_eoicount(void)
 	write_gicreg(hcr, ICH_HCR_EL2);
 }
 
-static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_dir(struct vgic_v3_cpu_if *cpu_if,
+				struct kvm_cpu_context *vcpu_ctxt,
+				struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 vid = ctxt_get_reg(vcpu_ctxt, rt);
 	u64 lr_val;
 	int lr;
@@ -749,7 +751,7 @@ static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	if (vid >= VGIC_MIN_LPI)
 		return;
 
-	lr = __vgic_v3_find_active_lr(vcpu, vid, &lr_val);
+	lr = __vgic_v3_find_active_lr(cpu_if, vid, &lr_val);
 	if (lr == -1) {
 		__vgic_v3_bump_eoicount();
 		return;
@@ -758,16 +760,17 @@ static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	__vgic_v3_clear_active_lr(lr, lr_val);
 }
 
-static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_eoir(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 vid = ctxt_get_reg(vcpu_ctxt, rt);
 	u64 lr_val;
 	u8 lr_prio, act_prio;
 	int lr, grp;
 
-	grp = __vgic_v3_get_group(vcpu);
+	grp = __vgic_v3_get_group(vcpu_ctxt, vcpu_hyps);
 
 	/* Drop priority in any case */
 	act_prio = __vgic_v3_clear_highest_active_priority();
@@ -780,7 +783,7 @@ static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	if (vmcr & ICH_VMCR_EOIM_MASK)
 		return;
 
-	lr = __vgic_v3_find_active_lr(vcpu, vid, &lr_val);
+	lr = __vgic_v3_find_active_lr(cpu_if, vid, &lr_val);
 	if (lr == -1) {
 		__vgic_v3_bump_eoicount();
 		return;
@@ -797,24 +800,27 @@ static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	__vgic_v3_clear_active_lr(lr, lr_val);
 }
 
-static void __vgic_v3_read_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_igrpen0(struct vgic_v3_cpu_if *cpu_if,
+				   struct kvm_cpu_context *vcpu_ctxt,
+				   struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				   int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG0_MASK));
 }
 
-static void __vgic_v3_read_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_igrpen1(struct vgic_v3_cpu_if *cpu_if,
+				   struct kvm_cpu_context *vcpu_ctxt,
+				   struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				   int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	ctxt_set_reg(vcpu_ctxt, rt, !!(vmcr & ICH_VMCR_ENG1_MASK));
 }
 
-static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_igrpen0(struct vgic_v3_cpu_if *cpu_if,
+				    struct kvm_cpu_context *vcpu_ctxt,
+				    struct vcpu_hyp_state *vcpu_hyps,
+				    u32 vmcr, int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	if (val & 1)
@@ -825,10 +831,11 @@ static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	__vgic_v3_write_vmcr(vmcr);
 }
 
-static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_igrpen1(struct vgic_v3_cpu_if *cpu_if,
+				    struct kvm_cpu_context *vcpu_ctxt,
+				    struct vcpu_hyp_state *vcpu_hyps,
+				    u32 vmcr, int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	if (val & 1)
@@ -839,24 +846,27 @@ static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	__vgic_v3_write_vmcr(vmcr);
 }
 
-static void __vgic_v3_read_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_bpr0(struct vgic_v3_cpu_if *cpu_if,
+				struct kvm_cpu_context *vcpu_ctxt,
+				struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr0(vmcr));
 }
 
-static void __vgic_v3_read_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_bpr1(struct vgic_v3_cpu_if *cpu_if,
+				struct kvm_cpu_context *vcpu_ctxt,
+				struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	ctxt_set_reg(vcpu_ctxt, rt, __vgic_v3_get_bpr1(vmcr));
 }
 
-static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_bpr0(struct vgic_v3_cpu_if *cpu_if,
+				struct kvm_cpu_context *vcpu_ctxt,
+				struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 	u8 bpr_min = __vgic_v3_bpr_min() - 1;
 
@@ -872,10 +882,11 @@ static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	__vgic_v3_write_vmcr(vmcr);
 }
 
-static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_bpr1(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 val = ctxt_get_reg(vcpu_ctxt, rt);
 	u8 bpr_min = __vgic_v3_bpr_min();
 
@@ -894,13 +905,14 @@ static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	__vgic_v3_write_vmcr(vmcr);
 }
 
-static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
+static void __vgic_v3_read_apxrn(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, int rt,
+				 int n)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val;
 
-	if (!__vgic_v3_get_group(vcpu))
+	if (!__vgic_v3_get_group(vcpu_ctxt, vcpu_hyps))
 		val = __vgic_v3_read_ap0rn(n);
 	else
 		val = __vgic_v3_read_ap1rn(n);
@@ -908,86 +920,94 @@ static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
 	ctxt_set_reg(vcpu_ctxt, rt, val);
 }
 
-static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n)
+static void __vgic_v3_write_apxrn(struct vgic_v3_cpu_if *cpu_if,
+				  struct kvm_cpu_context *vcpu_ctxt,
+				  struct vcpu_hyp_state *vcpu_hyps, int rt,
+				  int n)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val = ctxt_get_reg(vcpu_ctxt, rt);
 
-	if (!__vgic_v3_get_group(vcpu))
+	if (!__vgic_v3_get_group(vcpu_ctxt, vcpu_hyps))
 		__vgic_v3_write_ap0rn(val, n);
 	else
 		__vgic_v3_write_ap1rn(val, n);
 }
 
-static void __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu,
+static void __vgic_v3_read_apxr0(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps,
 				 u32 vmcr, int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_read_apxrn(vcpu, rt, 0);
+	__vgic_v3_read_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 0);
 }
 
-static void __vgic_v3_read_apxr1(struct kvm_vcpu *vcpu,
+static void __vgic_v3_read_apxr1(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps,
 				 u32 vmcr, int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_read_apxrn(vcpu, rt, 1);
+	__vgic_v3_read_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 1);
 }
 
-static void __vgic_v3_read_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_apxr2(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_read_apxrn(vcpu, rt, 2);
+	__vgic_v3_read_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 2);
 }
 
-static void __vgic_v3_read_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_apxr3(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_read_apxrn(vcpu, rt, 3);
+	__vgic_v3_read_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 3);
 }
 
-static void __vgic_v3_write_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_apxr0(struct vgic_v3_cpu_if *cpu_if,
+				  struct kvm_cpu_context *vcpu_ctxt,
+				  struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				  int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_write_apxrn(vcpu, rt, 0);
+	__vgic_v3_write_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 0);
 }
 
-static void __vgic_v3_write_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_apxr1(struct vgic_v3_cpu_if *cpu_if,
+				  struct kvm_cpu_context *vcpu_ctxt,
+				  struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				  int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_write_apxrn(vcpu, rt, 1);
+	__vgic_v3_write_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 1);
 }
 
-static void __vgic_v3_write_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_apxr2(struct vgic_v3_cpu_if *cpu_if,
+				  struct kvm_cpu_context *vcpu_ctxt,
+				  struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				  int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_write_apxrn(vcpu, rt, 2);
+	__vgic_v3_write_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 2);
 }
 
-static void __vgic_v3_write_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_apxr3(struct vgic_v3_cpu_if *cpu_if,
+				  struct kvm_cpu_context *vcpu_ctxt,
+				  struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				  int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
-	__vgic_v3_write_apxrn(vcpu, rt, 3);
+	__vgic_v3_write_apxrn(cpu_if, vcpu_ctxt, vcpu_hyps, rt, 3);
 }
 
-static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_hppir(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u64 lr_val;
 	int lr, lr_grp, grp;
 
-	grp = __vgic_v3_get_group(vcpu);
+	grp = __vgic_v3_get_group(vcpu_ctxt, vcpu_hyps);
 
-	lr = __vgic_v3_highest_priority_lr(vcpu, vmcr, &lr_val);
+	lr = __vgic_v3_highest_priority_lr(cpu_if, vmcr, &lr_val);
 	if (lr == -1)
 		goto spurious;
 
@@ -999,19 +1019,21 @@ static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	ctxt_set_reg(vcpu_ctxt, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK);
 }
 
-static void __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_pmr(struct vgic_v3_cpu_if *cpu_if,
+			       struct kvm_cpu_context *vcpu_ctxt,
+			       struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+			       int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	vmcr &= ICH_VMCR_PMR_MASK;
 	vmcr >>= ICH_VMCR_PMR_SHIFT;
 	ctxt_set_reg(vcpu_ctxt, rt, vmcr);
 }
 
-static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_pmr(struct vgic_v3_cpu_if *cpu_if,
+				struct kvm_cpu_context *vcpu_ctxt,
+				struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	val <<= ICH_VMCR_PMR_SHIFT;
@@ -1022,18 +1044,20 @@ static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	write_gicreg(vmcr, ICH_VMCR_EL2);
 }
 
-static void __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_rpr(struct vgic_v3_cpu_if *cpu_if,
+			       struct kvm_cpu_context *vcpu_ctxt,
+			       struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+			       int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val = __vgic_v3_get_highest_active_priority();
 	ctxt_set_reg(vcpu_ctxt, rt, val);
 }
 
-static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_read_ctlr(struct vgic_v3_cpu_if *cpu_if,
+				struct kvm_cpu_context *vcpu_ctxt,
+				struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 vtr, val;
 
 	vtr = read_gicreg(ICH_VTR_EL2);
@@ -1053,10 +1077,11 @@ static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 	ctxt_set_reg(vcpu_ctxt, rt, val);
 }
 
-static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
+static void __vgic_v3_write_ctlr(struct vgic_v3_cpu_if *cpu_if,
+				 struct kvm_cpu_context *vcpu_ctxt,
+				 struct vcpu_hyp_state *vcpu_hyps, u32 vmcr,
+				 int rt)
 {
-	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
-	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	u32 val = ctxt_get_reg(vcpu_ctxt, rt);
 
 	if (val & ICC_CTLR_EL1_CBPR_MASK)
@@ -1074,16 +1099,18 @@ static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt)
 
 int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
 {
+	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
 	struct vcpu_hyp_state *vcpu_hyps = &hyp_state(vcpu);
 	struct kvm_cpu_context *vcpu_ctxt = &vcpu_ctxt(vcpu);
 	int rt;
 	u32 esr;
 	u32 vmcr;
-	void (*fn)(struct kvm_vcpu *, u32, int);
+	void (*fn)(struct vgic_v3_cpu_if *, struct kvm_cpu_context *,
+		   struct vcpu_hyp_state *, u32, int);
 	bool is_read;
 	u32 sysreg;
 
-	esr = kvm_vcpu_get_esr(vcpu);
+	esr = kvm_hyp_state_get_esr(vcpu_hyps);
 	if (ctxt_mode_is_32bit(vcpu_ctxt)) {
 		if (!kvm_condition_valid(vcpu)) {
 			__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);
@@ -1195,8 +1222,8 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu)
 	}
 
 	vmcr = __vgic_v3_read_vmcr();
-	rt = kvm_vcpu_sys_get_rt(vcpu);
-	fn(vcpu, vmcr, rt);
+	rt = kvm_hyp_state_sys_get_rt(vcpu_hyps);
+	fn(cpu_if, vcpu_ctxt, vcpu_hyps, vmcr, rt);
 
 	__kvm_skip_instr(vcpu_ctxt, vcpu_hyps);