From patchwork Fri Apr 15 21:58:56 2022
Date: Fri, 15 Apr 2022 21:58:56 +0000
In-Reply-To: <20220415215901.1737897-1-oupton@google.com>
Message-Id: <20220415215901.1737897-13-oupton@google.com>
References: <20220415215901.1737897-1-oupton@google.com>
X-Mailer: git-send-email 2.36.0.rc0.470.gd361397f0d-goog
Subject: [RFC PATCH 12/17] KVM: arm64: Stuff mmu page cache in sub struct
From: Oliver Upton
To: kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, Marc Zyngier, James Morse, Alexandru Elisei,
    Suzuki K Poulose, linux-arm-kernel@lists.infradead.org, Peter Shier,
    Ricardo Koller, Reiji Watanabe, Paolo Bonzini, Sean Christopherson,
    Ben Gardon, David Matlack, Oliver Upton

We're about to add another mmu cache. Stuff the current one in a sub
struct so it's easier to pass them all to ->zalloc_page().

No functional change intended.

Signed-off-by: Oliver Upton
---
 arch/arm64/include/asm/kvm_host.h |  4 +++-
 arch/arm64/kvm/mmu.c              | 14 +++++++-------
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 94a27a7520f4..c8947597a619 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -372,7 +372,9 @@ struct kvm_vcpu_arch {
 	bool pause;
 
 	/* Cache some mmu pages needed inside spinlock regions */
-	struct kvm_mmu_memory_cache mmu_page_cache;
+	struct kvm_mmu_caches {
+		struct kvm_mmu_memory_cache page_cache;
+	} mmu_caches;
 
 	/* Target CPU and feature flags */
 	int target;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index f29d5179196b..7a588928740a 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -91,10 +91,10 @@ static bool kvm_is_device_pfn(unsigned long pfn)
 
 static void *stage2_memcache_zalloc_page(void *arg)
 {
-	struct kvm_mmu_memory_cache *mc = arg;
+	struct kvm_mmu_caches *mmu_caches = arg;
 
 	/* Allocated with __GFP_ZERO, so no need to zero */
-	return kvm_mmu_memory_cache_alloc(mc);
+	return kvm_mmu_memory_cache_alloc(&mmu_caches->page_cache);
 }
 
 static void *kvm_host_zalloc_pages_exact(size_t size)
@@ -1073,7 +1073,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	bool shared;
 	unsigned long mmu_seq;
 	struct kvm *kvm = vcpu->kvm;
-	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
+	struct kvm_mmu_caches *mmu_caches = &vcpu->arch.mmu_caches;
 	struct vm_area_struct *vma;
 	short vma_shift;
 	gfn_t gfn;
@@ -1160,7 +1160,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * and a write fault needs to collapse a block entry into a table.
 	 */
 	if (fault_status != FSC_PERM || (logging_active && write_fault)) {
-		ret = kvm_mmu_topup_memory_cache(memcache,
+		ret = kvm_mmu_topup_memory_cache(&mmu_caches->page_cache,
 						 kvm_mmu_cache_min_pages(kvm));
 		if (ret)
 			return ret;
@@ -1273,7 +1273,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	} else {
 		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
 					     __pfn_to_phys(pfn), prot,
-					     memcache);
+					     mmu_caches);
 	}
 
 	/* Mark the page dirty only if the fault is handled successfully */
@@ -1603,12 +1603,12 @@ int kvm_mmu_init(u32 *hyp_va_bits)
 
 void kvm_mmu_vcpu_init(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
+	vcpu->arch.mmu_caches.page_cache.gfp_zero = __GFP_ZERO;
 }
 
 void kvm_mmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
-	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_caches.page_cache);
 }
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,
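
---

Why the wrapper struct helps: the stage-2 page-table code only ever hands
an opaque void * through to ->zalloc_page(), so the caller and the callback
just have to agree on what that pointer refers to. Once it points at a
struct of caches rather than at one cache, adding a cache later is a struct
edit rather than a signature change. Below is a minimal, self-contained
userspace sketch of that pattern; every demo_* name is a hypothetical
stand-in, not the kernel API.

/*
 * Hypothetical sketch only: demo_cache stands in for struct
 * kvm_mmu_memory_cache, demo_mmu_caches for the new wrapper, and
 * demo_zalloc_page for stage2_memcache_zalloc_page(). None of these
 * names are kernel APIs.
 */
#include <stdio.h>
#include <stdlib.h>

#define DEMO_PAGE_SIZE	4096
#define DEMO_CACHE_CAP	8

/* A tiny stack of pre-allocated, zeroed pages. */
struct demo_cache {
	void *pages[DEMO_CACHE_CAP];
	int nobjs;
};

/* The wrapper: a later change adds more caches here and nowhere else. */
struct demo_mmu_caches {
	struct demo_cache page_cache;
};

/* Fill the cache up to min entries before entering a no-alloc region. */
static int demo_cache_topup(struct demo_cache *mc, int min)
{
	while (mc->nobjs < min) {
		void *page = calloc(1, DEMO_PAGE_SIZE);

		if (!page)
			return -1;
		mc->pages[mc->nobjs++] = page;
	}
	return 0;
}

/*
 * The opaque callback: it receives the whole wrapper and picks the
 * cache it needs, so its signature never changes as caches are added.
 * Assumes the caller topped up the cache first.
 */
static void *demo_zalloc_page(void *arg)
{
	struct demo_mmu_caches *caches = arg;

	return caches->page_cache.pages[--caches->page_cache.nobjs];
}

int main(void)
{
	struct demo_mmu_caches caches = { { { NULL }, 0 } };
	void *page;

	if (demo_cache_topup(&caches.page_cache, 4))
		return 1;

	/* Callers hand over the wrapper, not an individual cache. */
	page = demo_zalloc_page(&caches);
	printf("got zeroed page at %p\n", page);

	free(page);
	while (caches.page_cache.nobjs)
		free(caches.page_cache.pages[--caches.page_cache.nobjs]);
	return 0;
}

The patch keeps the same contract: stage2_memcache_zalloc_page() now
dereferences mmu_caches->page_cache, and user_mem_abort() tops up that
member before mapping, while the pointer passed around stays opaque.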