From patchwork Mon Apr 1 23:29:45 2024
X-Patchwork-Submitter: James Houghton
X-Patchwork-Id: 13613119
Date: Mon, 1 Apr 2024 23:29:45 +0000
In-Reply-To: <20240401232946.1837665-1-jthoughton@google.com>
References: <20240401232946.1837665-1-jthoughton@google.com>
Message-ID: <20240401232946.1837665-7-jthoughton@google.com>
Subject: [PATCH v3 6/7] KVM: arm64: Participate in bitmap-based PTE aging
From: James Houghton
To: Andrew Morton, Paolo Bonzini
Cc: Yu Zhao, David Matlack, Marc Zyngier, Oliver Upton,
 Sean Christopherson, Jonathan Corbet, James Morse, Suzuki K Poulose,
 Zenghui Yu, Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, Dave Hansen, "H. Peter Anvin", Steven Rostedt,
 Masami Hiramatsu, Mathieu Desnoyers, Shaoqin Huang, Gavin Shan,
 Ricardo Koller, Raghavendra Rao Ananta, Ryan Roberts, David Rientjes,
 Axel Rasmussen, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 kvm@vger.kernel.org, linux-mm@kvack.org,
 linux-trace-kernel@vger.kernel.org, James Houghton

Participate in bitmap-based aging while grabbing the KVM MMU lock for
reading. Ideally we wouldn't need to grab this lock at all, but that
would require a more intrusive and risky change. Also pass
KVM_PGTABLE_WALK_SHARED, as this software walker is safe to run in
parallel with other walkers.
It is only safe to grab the KVM MMU lock for reading because the
kvm_pgtable is destroyed while holding the lock for writing, and the
page table pages are freed either while holding the MMU lock for
writing or after an RCU grace period.

When mkold == false, record the young pages in the passed-in bitmap.

When mkold == true, only age the pages that need aging according to
the passed-in bitmap.

Suggested-by: Yu Zhao
Signed-off-by: James Houghton
---
 arch/arm64/include/asm/kvm_host.h    |  5 +++++
 arch/arm64/include/asm/kvm_pgtable.h |  4 +++-
 arch/arm64/kvm/hyp/pgtable.c         | 21 ++++++++++++++-------
 arch/arm64/kvm/mmu.c                 | 23 +++++++++++++++++++++--
 4 files changed, 43 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9e8a496fb284..e503553cb356 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1331,4 +1331,9 @@ bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
 	(get_idreg_field((kvm), id, fld) >= expand_field_sign(id, fld, min) && \
 	 get_idreg_field((kvm), id, fld) <= expand_field_sign(id, fld, max))
 
+#define kvm_arch_prepare_bitmap_age kvm_arch_prepare_bitmap_age
+bool kvm_arch_prepare_bitmap_age(struct mmu_notifier *mn);
+#define kvm_arch_finish_bitmap_age kvm_arch_finish_bitmap_age
+void kvm_arch_finish_bitmap_age(struct mmu_notifier *mn);
+
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 19278dfe7978..1976b4e26188 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -644,6 +644,7 @@ kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr);
  * @addr:	Intermediate physical address to identify the page-table entry.
  * @size:	Size of the address range to visit.
  * @mkold:	True if the access flag should be cleared.
+ * @range:	The kvm_gfn_range that is being used for the memslot walker.
 *
 * The offset of @addr within a page is ignored.
 *
@@ -657,7 +658,8 @@ kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr);
  * Return: True if any of the visited PTEs had the access flag set.
  */
 bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
-					 u64 size, bool mkold);
+					 u64 size, bool mkold,
+					 struct kvm_gfn_range *range);
 
 /**
  * kvm_pgtable_stage2_relax_perms() - Relax the permissions enforced by a
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3fae5830f8d2..e881d3595aca 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1281,6 +1281,7 @@ kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr)
 }
 
 struct stage2_age_data {
+	struct kvm_gfn_range *range;
 	bool mkold;
 	bool young;
 };
@@ -1290,20 +1291,24 @@ static int stage2_age_walker(const struct kvm_pgtable_visit_ctx *ctx,
 {
 	kvm_pte_t new = ctx->old & ~KVM_PTE_LEAF_ATTR_LO_S2_AF;
 	struct stage2_age_data *data = ctx->arg;
+	gfn_t gfn = ctx->addr / PAGE_SIZE;
 
 	if (!kvm_pte_valid(ctx->old) || new == ctx->old)
 		return 0;
 
 	data->young = true;
+
 	/*
-	 * stage2_age_walker() is always called while holding the MMU lock for
-	 * write, so this will always succeed. Nonetheless, this deliberately
-	 * follows the race detection pattern of the other stage-2 walkers in
-	 * case the locking mechanics of the MMU notifiers is ever changed.
+	 * stage2_age_walker() may not be holding the MMU lock for write, so
+	 * follow the race detection pattern of the other stage-2 walkers.
	 */
-	if (data->mkold && !stage2_try_set_pte(ctx, new))
-		return -EAGAIN;
+	if (data->mkold) {
+		if (kvm_gfn_should_age(data->range, gfn) &&
+		    !stage2_try_set_pte(ctx, new))
+			return -EAGAIN;
+	} else
+		kvm_gfn_record_young(data->range, gfn);
 
 	/*
 	 * "But where's the TLBI?!", you scream.
@@ -1315,10 +1320,12 @@ static int stage2_age_walker(const struct kvm_pgtable_visit_ctx *ctx,
 }
 
 bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
-					 u64 size, bool mkold)
+					 u64 size, bool mkold,
+					 struct kvm_gfn_range *range)
 {
 	struct stage2_age_data data = {
 		.mkold = mkold,
+		.range = range,
 	};
 	struct kvm_pgtable_walker walker = {
 		.cb = stage2_age_walker,
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 18680771cdb0..104cc23e9bb3 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1802,6 +1802,25 @@ bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	return false;
 }
 
+bool kvm_arch_prepare_bitmap_age(struct mmu_notifier *mn)
+{
+	struct kvm *kvm = mmu_notifier_to_kvm(mn);
+
+	/*
+	 * We need to hold the MMU lock for reading to prevent page tables
+	 * from being freed underneath us.
+	 */
+	read_lock(&kvm->mmu_lock);
+	return true;
+}
+
+void kvm_arch_finish_bitmap_age(struct mmu_notifier *mn)
+{
+	struct kvm *kvm = mmu_notifier_to_kvm(mn);
+
+	read_unlock(&kvm->mmu_lock);
+}
+
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	u64 size = (range->end - range->start) << PAGE_SHIFT;
@@ -1811,7 +1830,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 
 	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
 						   range->start << PAGE_SHIFT,
-						   size, true);
+						   size, true, range);
 }
 
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
{
 	u64 size = (range->end - range->start) << PAGE_SHIFT;
 
@@ -1823,7 +1842,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 
 	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
 						   range->start << PAGE_SHIFT,
-						   size, false);
+						   size, false, range);
 }
 
 phys_addr_t kvm_mmu_get_httbr(void)