From patchwork Mon Feb 6 17:23:40 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13130420
Date: Mon, 6 Feb 2023 17:23:40 +0000
In-Reply-To: <20230206172340.2639971-1-rananta@google.com>
Mime-Version: 1.0
References: <20230206172340.2639971-1-rananta@google.com>
X-Mailer: git-send-email 2.39.1.519.gcb327c4b5f-goog
Message-ID:
<20230206172340.2639971-8-rananta@google.com>
Subject: [PATCH v2 7/7] KVM: arm64: Create a fast stage-2 unmap path
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, Ricardo Koller, Reiji Watanabe, James Morse, Alexandru Elisei, Suzuki K Poulose, Will Deacon
Cc: Paolo Bonzini, Catalin Marinas, Jing Zhang, Colton Lewis, Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

The current implementation of the stage-2 unmap walker traverses the
entire page-table to clear and flush the TLBs for each entry. This can
be very expensive, especially if the VM is not backed by hugepages. The
unmap operation can be made more efficient by disconnecting the table at
the very top (the level at which the largest block mapping can be
hosted) and doing the rest of the unmapping via free_removed_table().
If the system supports FEAT_TLBIRANGE, flush the entire range that has
been disconnected from the rest of the page-table.
Suggested-by: Ricardo Koller
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/hyp/pgtable.c | 44 ++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 0858d1fa85d6b..af3729d0971f2 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1017,6 +1017,49 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	return 0;
 }
 
+/*
+ * The fast walker executes only if the unmap size is exactly equal to the
+ * largest block mapping supported (i.e. at KVM_PGTABLE_MIN_BLOCK_LEVEL),
+ * such that the underneath hierarchy at KVM_PGTABLE_MIN_BLOCK_LEVEL can
+ * be disconnected from the rest of the page-table without the need to
+ * traverse all the PTEs, at all the levels, and unmap each and every one
+ * of them. The disconnected table is freed using free_removed_table().
+ */
+static int fast_stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
+				    enum kvm_pgtable_walk_flags visit)
+{
+	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
+	kvm_pte_t *childp = kvm_pte_follow(ctx->old, mm_ops);
+	struct kvm_s2_mmu *mmu = ctx->arg;
+
+	if (!kvm_pte_valid(ctx->old) || ctx->level != KVM_PGTABLE_MIN_BLOCK_LEVEL)
+		return 0;
+
+	if (!stage2_try_break_pte(ctx, mmu))
+		return -EAGAIN;
+
+	/*
+	 * Gain back a reference for stage2_unmap_walker() to free
+	 * this table entry from KVM_PGTABLE_MIN_BLOCK_LEVEL - 1.
+	 */
+	mm_ops->get_page(ctx->ptep);
+
+	mm_ops->free_removed_table(childp, ctx->level);
+	return 0;
+}
+
+static void kvm_pgtable_try_fast_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct kvm_pgtable_walker walker = {
+		.cb	= fast_stage2_unmap_walker,
+		.arg	= pgt->mmu,
+		.flags	= KVM_PGTABLE_WALK_TABLE_PRE,
+	};
+
+	if (size == kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL))
+		kvm_pgtable_walk(pgt, addr, size, &walker);
+}
+
 int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
 	struct kvm_pgtable_walker walker = {
@@ -1025,6 +1068,7 @@ int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
 	};
 
+	kvm_pgtable_try_fast_stage2_unmap(pgt, addr, size);
 	return kvm_pgtable_walk(pgt, addr, size, &walker);
 }