From patchwork Fri Apr 15 21:58:57 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Oliver Upton
X-Patchwork-Id: 12815474
Date: Fri, 15 Apr 2022 21:58:57 +0000
In-Reply-To: <20220415215901.1737897-1-oupton@google.com>
Message-Id: <20220415215901.1737897-14-oupton@google.com>
Mime-Version: 1.0
References: <20220415215901.1737897-1-oupton@google.com>
Subject: [RFC PATCH 13/17] KVM: arm64: Setup cache for stage2 page headers
From: Oliver Upton
To: kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, Marc Zyngier, James Morse, Alexandru Elisei,
 Suzuki K Poulose, linux-arm-kernel@lists.infradead.org, Peter Shier,
 Ricardo Koller, Reiji Watanabe, Paolo Bonzini, Sean Christopherson,
 Ben Gardon, David Matlack, Oliver Upton

In order to punt the last reference drop on a page to an RCU
synchronization, we need a pointer to the page with which to handle the
callback. Set up a memcache for stage2 page headers, but do nothing with
it for now.

Note that the kmem_cache is never destroyed, as it is currently not
possible to build KVM/arm64 as a module.
Signed-off-by: Oliver Upton
---
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/kvm/mmu.c              | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index c8947597a619..a640d015790e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -374,6 +374,7 @@ struct kvm_vcpu_arch {
 	/* Cache some mmu pages needed inside spinlock regions */
 	struct kvm_mmu_caches {
 		struct kvm_mmu_memory_cache page_cache;
+		struct kvm_mmu_memory_cache header_cache;
 	} mmu_caches;
 
 	/* Target CPU and feature flags */
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 7a588928740a..cc6ed6b06ec2 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -31,6 +31,12 @@ static phys_addr_t hyp_idmap_vector;
 
 static unsigned long io_map_base;
 
+static struct kmem_cache *stage2_page_header_cache;
+
+struct stage2_page_header {
+	struct rcu_head rcu_head;
+	struct page *page;
+};
 
 /*
  * Release kvm_mmu_lock periodically if the memory region is large. Otherwise,
@@ -1164,6 +1170,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 					  kvm_mmu_cache_min_pages(kvm));
 		if (ret)
 			return ret;
+
+		ret = kvm_mmu_topup_memory_cache(&mmu_caches->header_cache,
+						 kvm_mmu_cache_min_pages(kvm));
+		if (ret)
+			return ret;
 	}
 
 	mmu_seq = vcpu->kvm->mmu_notifier_seq;
@@ -1589,6 +1600,13 @@ int kvm_mmu_init(u32 *hyp_va_bits)
 	if (err)
 		goto out_destroy_pgtable;
 
+	stage2_page_header_cache = kmem_cache_create("stage2_page_header",
+						     sizeof(struct stage2_page_header),
+						     0, SLAB_ACCOUNT, NULL);
+
+	if (!stage2_page_header_cache)
+		goto out_destroy_pgtable;
+
 	io_map_base = hyp_idmap_start;
 
 	return 0;
@@ -1604,11 +1622,13 @@ int kvm_mmu_init(u32 *hyp_va_bits)
 void kvm_mmu_vcpu_init(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.mmu_caches.page_cache.gfp_zero = __GFP_ZERO;
+	vcpu->arch.mmu_caches.header_cache.kmem_cache = stage2_page_header_cache;
 }
 
 void kvm_mmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_caches.page_cache);
+	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_caches.header_cache);
 }
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,