From patchwork Fri Apr 15 21:59:01 2022
X-Patchwork-Submitter: Oliver Upton <oupton@google.com>
X-Patchwork-Id: 12815478
Date: Fri, 15 Apr 2022 21:59:01 +0000
In-Reply-To: <20220415215901.1737897-1-oupton@google.com>
Message-Id: <20220415215901.1737897-18-oupton@google.com>
References: <20220415215901.1737897-1-oupton@google.com>
X-Mailer: git-send-email 2.36.0.rc0.470.gd361397f0d-goog
Subject: [RFC PATCH 17/17] TESTONLY: KVM: arm64: Add super lazy accounting of stage 2 table pages
From: Oliver Upton <oupton@google.com>
To: kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, Marc Zyngier, James Morse, Alexandru Elisei,
 Suzuki K Poulose, linux-arm-kernel@lists.infradead.org, Peter Shier,
 Ricardo Koller, Reiji Watanabe, Paolo Bonzini, Sean Christopherson,
 Ben Gardon, David Matlack, Oliver Upton

Don't use this, please. I was just being lazy, but wanted to make sure
the table pages are all accounted for. There's a race here too; do you
see it? :)

Signed-off-by: Oliver Upton <oupton@google.com>
---
 arch/arm64/kvm/mmu.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 2881051c3743..68ea7f0244fe 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -95,6 +95,8 @@ static bool kvm_is_device_pfn(unsigned long pfn)
 	return !pfn_is_map_memory(pfn);
 }
 
+static atomic_t stage2_pages = ATOMIC_INIT(0);
+
 static void *stage2_memcache_zalloc_page(void *arg)
 {
 	struct kvm_mmu_caches *mmu_caches = arg;
@@ -112,6 +114,8 @@ static void *stage2_memcache_zalloc_page(void *arg)
 		return NULL;
 	}
 
+	atomic_inc(&stage2_pages);
+
 	hdr->page = virt_to_page(addr);
 	set_page_private(hdr->page, (unsigned long)hdr);
 	return addr;
@@ -121,6 +125,8 @@ static void stage2_free_page_now(struct stage2_page_header *hdr)
 {
 	WARN_ON(page_ref_count(hdr->page) != 1);
 
+	atomic_dec(&stage2_pages);
+
 	__free_page(hdr->page);
 	kmem_cache_free(stage2_page_header_cache, hdr);
 }
@@ -662,6 +668,8 @@ static struct kvm_pgtable_mm_ops kvm_s2_mm_ops = {
 	.icache_inval_pou	= invalidate_icache_guest_page,
 };
 
+static atomic_t stage2_mmus = ATOMIC_INIT(0);
+
 /**
  * kvm_init_stage2_mmu - Initialise a S2 MMU structure
  * @kvm:	The pointer to the KVM structure
@@ -699,6 +707,8 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
 	for_each_possible_cpu(cpu)
 		*per_cpu_ptr(mmu->last_vcpu_ran, cpu) = -1;
 
+	atomic_inc(&stage2_mmus);
+
 	mmu->pgt = pgt;
 	mmu->pgd_phys = __pa(pgt->pgd);
 	return 0;
@@ -796,6 +806,9 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
 		kvm_pgtable_stage2_destroy(pgt);
 		kfree(pgt);
 	}
+
+	if (atomic_dec_and_test(&stage2_mmus))
+		WARN_ON(atomic_read(&stage2_pages));
 }
 
 /**
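
For anyone hunting for the race the changelog teases, here is one plausible
reading (an assumption on my part, not necessarily the one the author has in
mind): the final atomic_dec_and_test(&stage2_mmus) and the following
atomic_read(&stage2_pages) are two independent atomic operations, so a VM
being created on another CPU (or a table page whose decrement is still
pending in a deferred free) can bump stage2_pages in between and make the
WARN_ON() fire with nothing actually leaked. The standalone user-space
sketch below models only the two counters with C11 atomics and replays that
interleaving step by step; every name in it is invented for the model.

/* race_sketch.c - models the stage2_pages/stage2_mmus accounting above. */
#include <stdatomic.h>
#include <stdio.h>

static atomic_int stage2_pages;
static atomic_int stage2_mmus;

int main(void)
{
	/* CPU0: VM A is up, with one stage-2 table page accounted. */
	atomic_fetch_add(&stage2_mmus, 1);
	atomic_fetch_add(&stage2_pages, 1);

	/* CPU0: VM A tears down; its page is freed and unaccounted. */
	atomic_fetch_sub(&stage2_pages, 1);

	/* CPU0: sees itself as the last MMU (mirrors atomic_dec_and_test). */
	int last = (atomic_fetch_sub(&stage2_mmus, 1) == 1);

	/* CPU1: VM B is created in the window before CPU0's read below. */
	atomic_fetch_add(&stage2_mmus, 1);
	atomic_fetch_add(&stage2_pages, 1);

	/* CPU0: the leak check now observes VM B's page and "warns". */
	if (last && atomic_load(&stage2_pages) != 0)
		printf("spurious WARN_ON: stage2_pages=%d with nothing leaked\n",
		       atomic_load(&stage2_pages));

	return 0;
}

Whether that is the intended answer, or whether the deferred-free variant of
the same window is, is left as the exercise the changelog poses.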