From patchwork Mon Jan 9 21:53:47 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13094423
Date: Mon, 9 Jan 2023 21:53:47 +0000
In-Reply-To: <20230109215347.3119271-1-rananta@google.com>
References: <20230109215347.3119271-1-rananta@google.com>
Message-ID:
<20230109215347.3119271-7-rananta@google.com>
Subject: [RFC PATCH 6/6] KVM: arm64: Create a fast stage-2 unmap path
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, Ricardo Koller, Reiji Watanabe, James Morse, Alexandru Elisei, Suzuki K Poulose
Cc: Paolo Bonzini, Catalin Marinas, Will Deacon, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

The current implementation of the stage-2 unmap walker traverses the entire
page-table to clear and flush the TLBs for each entry. This can be very
expensive if the VM is not backed by hugepages. The unmap operation can be
made more efficient by disconnecting the table at the very top (the level at
which the largest block mapping can be hosted) and doing the rest of the
unmapping using free_removed_table(). If the system supports FEAT_TLBIRANGE,
flush the entire range that has been disconnected from the rest of the
page-table.
Suggested-by: Ricardo Koller
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/hyp/pgtable.c | 44 ++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 099032bb01bce..7bcd898de2805 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1021,6 +1021,49 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	return 0;
 }
 
+/*
+ * The fast walker executes only if the unmap size is exactly equal to the
+ * largest block mapping supported (i.e. at KVM_PGTABLE_MIN_BLOCK_LEVEL),
+ * such that the underneath hierarchy at KVM_PGTABLE_MIN_BLOCK_LEVEL can
+ * be disconnected from the rest of the page-table without the need to
+ * traverse all the PTEs, at all the levels, and unmap each and every one
+ * of them. The disconnected table can be freed using free_removed_table().
+ */
+static int fast_stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
+				    enum kvm_pgtable_walk_flags visit)
+{
+	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
+	kvm_pte_t *childp = kvm_pte_follow(ctx->old, mm_ops);
+	struct kvm_s2_mmu *mmu = ctx->arg;
+
+	if (!kvm_pte_valid(ctx->old) || ctx->level != KVM_PGTABLE_MIN_BLOCK_LEVEL)
+		return 0;
+
+	if (!stage2_try_break_pte(ctx, mmu, 0))
+		return -EAGAIN;
+
+	/*
+	 * Gain back a reference for stage2_unmap_walker() to free
+	 * this table entry from KVM_PGTABLE_MIN_BLOCK_LEVEL - 1.
+	 */
+	mm_ops->get_page(ctx->ptep);
+
+	mm_ops->free_removed_table(childp, ctx->level);
+	return 0;
+}
+
+static void kvm_pgtable_try_fast_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct kvm_pgtable_walker walker = {
+		.cb	= fast_stage2_unmap_walker,
+		.arg	= pgt->mmu,
+		.flags	= KVM_PGTABLE_WALK_TABLE_PRE,
+	};
+
+	if (size == kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL))
+		kvm_pgtable_walk(pgt, addr, size, &walker);
+}
+
 int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
 	struct kvm_pgtable_walker walker = {
@@ -1029,6 +1072,7 @@ int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
 	};
 
+	kvm_pgtable_try_fast_stage2_unmap(pgt, addr, size);
 	return kvm_pgtable_walk(pgt, addr, size, &walker);
 }