From patchwork Tue Jun 6 19:28:58 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13269677
Date: Tue, 6 Jun 2023 19:28:58 +0000
In-Reply-To: <20230606192858.3600174-1-rananta@google.com>
References: <20230606192858.3600174-1-rananta@google.com>
Message-ID: <20230606192858.3600174-8-rananta@google.com>
Subject: [PATCH v5 7/7] KVM: arm64: Use TLBI range-based instructions for unmap
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Ananta,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, kvm@vger.kernel.org

The current implementation of the stage-2 unmap walker traverses the
given range and, as part of break-before-make, performs a TLB
invalidation followed by a DSB for every PTE. Over a large range,
this per-PTE maintenance can become a performance bottleneck on some
systems. Hence, if the system supports FEAT_TLBIRANGE, defer the TLB
invalidations until the entire walk is finished, and then use
range-based instructions to invalidate the TLBs in one go. Condition
deferred TLB invalidation on the system supporting FWB, as the
optimization is entirely pointless when the unmap walker needs to
perform CMOs.

Rename stage2_put_pte() to stage2_unmap_put_pte() as the function now
serves the stage-2 unmap walker specifically, rather than being
generic.
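In effect, the unmap path goes from one TLBI+DSB per cleared PTE to a
single range-based invalidation issued once the walk completes. The
stand-alone sketch below illustrates that pattern only; every name in
it is a hypothetical stand-in, not the kernel code itself (the real
code uses system_supports_tlb_range(), stage2_has_fwb(),
__kvm_tlb_flush_vmid_ipa and kvm_tlb_flush_vmid_range):

#include <stdbool.h>
#include <stdio.h>

static bool have_tlbi_range = true;	/* stand-in: FEAT_TLBIRANGE present */
static bool have_fwb = true;		/* stand-in: FEAT_FWB present, no CMOs */

static bool defer_tlb_flush(void)
{
	/* Defer only when both range-based TLBI and FWB are available. */
	return have_tlbi_range && have_fwb;
}

static void unmap_range(unsigned long addr, unsigned long size,
			unsigned long granule)
{
	for (unsigned long va = addr; va < addr + size; va += granule) {
		/* ...clear the PTE (the "break" step)... */
		if (!defer_tlb_flush())
			printf("TLBI + DSB for %#lx\n", va); /* per-PTE flush */
	}

	/* One range-based invalidation covering the whole walk. */
	if (defer_tlb_flush())
		printf("range TLBI for [%#lx, %#lx)\n", addr, addr + size);
}

int main(void)
{
	unmap_range(0x80000000UL, 0x4000UL, 0x1000UL);
	return 0;
}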
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/hyp/pgtable.c | 67 +++++++++++++++++++++++++++++++-----
 1 file changed, 58 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 50ef7623c54db..c6e080867919d 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -789,16 +789,54 @@ static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t n
 	smp_store_release(ctx->ptep, new);
 }
 
-static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu,
-			   struct kvm_pgtable_mm_ops *mm_ops)
+struct stage2_unmap_data {
+	struct kvm_pgtable *pgt;
+	bool defer_tlb_flush_init;
+};
+
+static bool __stage2_unmap_defer_tlb_flush(struct kvm_pgtable *pgt)
+{
+	/*
+	 * If FEAT_TLBIRANGE is implemented, defer the individual
+	 * TLB invalidations until the entire walk is finished, and
+	 * then use the range-based TLBI instructions to do the
+	 * invalidations. Condition deferred TLB invalidation on the
+	 * system supporting FWB, as the optimization is entirely
+	 * pointless when the unmap walker needs to perform CMOs.
+	 */
+	return system_supports_tlb_range() && stage2_has_fwb(pgt);
+}
+
+static bool stage2_unmap_defer_tlb_flush(struct stage2_unmap_data *unmap_data)
+{
+	bool defer_tlb_flush = __stage2_unmap_defer_tlb_flush(unmap_data->pgt);
+
+	/*
+	 * Since __stage2_unmap_defer_tlb_flush() is based on alternative
+	 * patching and the behavior of the TLBI operations depends on it,
+	 * track any change in the state during the unmap sequence.
+	 */
+	WARN_ON(unmap_data->defer_tlb_flush_init != defer_tlb_flush);
+	return defer_tlb_flush;
+}
+
+static void stage2_unmap_put_pte(const struct kvm_pgtable_visit_ctx *ctx,
+				 struct kvm_s2_mmu *mmu,
+				 struct kvm_pgtable_mm_ops *mm_ops)
 {
+	struct stage2_unmap_data *unmap_data = ctx->arg;
+
 	/*
-	 * Clear the existing PTE, and perform break-before-make with
-	 * TLB maintenance if it was valid.
+	 * Clear the existing PTE, and perform break-before-make if it was
+	 * valid. Depending on the system support, the TLB maintenance for
+	 * the same can be deferred until the entire unmap is completed.
 	 */
 	if (kvm_pte_valid(ctx->old)) {
 		kvm_clear_pte(ctx->ptep);
-		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
+
+		if (!stage2_unmap_defer_tlb_flush(unmap_data))
+			kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
+				     ctx->addr, ctx->level);
 	}
 
 	mm_ops->put_page(ctx->ptep);
@@ -1005,7 +1043,8 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
 static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 			       enum kvm_pgtable_walk_flags visit)
 {
-	struct kvm_pgtable *pgt = ctx->arg;
+	struct stage2_unmap_data *unmap_data = ctx->arg;
+	struct kvm_pgtable *pgt = unmap_data->pgt;
 	struct kvm_s2_mmu *mmu = pgt->mmu;
 	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
 	kvm_pte_t *childp = NULL;
@@ -1033,7 +1072,7 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	 * block entry and rely on the remaining portions being faulted
 	 * back lazily.
 	 */
-	stage2_put_pte(ctx, mmu, mm_ops);
+	stage2_unmap_put_pte(ctx, mmu, mm_ops);
 
 	if (need_flush && mm_ops->dcache_clean_inval_poc)
 		mm_ops->dcache_clean_inval_poc(kvm_pte_follow(ctx->old, mm_ops),
@@ -1047,13 +1086,23 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 
 int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
+	int ret;
+	struct stage2_unmap_data unmap_data = {
+		.pgt = pgt,
+		.defer_tlb_flush_init = __stage2_unmap_defer_tlb_flush(pgt),
+	};
 	struct kvm_pgtable_walker walker = {
 		.cb	= stage2_unmap_walker,
-		.arg	= pgt,
+		.arg	= &unmap_data,
 		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
 	};
 
-	return kvm_pgtable_walk(pgt, addr, size, &walker);
+	ret = kvm_pgtable_walk(pgt, addr, size, &walker);
+	if (stage2_unmap_defer_tlb_flush(&unmap_data))
+		/* Perform the deferred TLB invalidations */
+		kvm_tlb_flush_vmid_range(pgt->mmu, addr, size);
+
+	return ret;
 }
 
 struct stage2_attr_data {
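A note on the defer_tlb_flush_init snapshot: both capability checks
behind __stage2_unmap_defer_tlb_flush() are resolved via alternative
patching, so the deferral decision should be stable for the lifetime
of the walk; the WARN_ON() makes that assumption explicit. If the
decision could flip mid-walk, PTEs cleared while deferral was enabled
would never get a per-PTE flush, and the final range flush could be
skipped as well, leaving stale TLB entries. A minimal stand-alone
sketch of the invariant, with hypothetical names:

#include <assert.h>
#include <stdbool.h>

/* Stand-in for the alternative-patched capability checks; in the
 * kernel this result is fixed at boot and must not change later. */
static bool can_defer_flush(void)
{
	return true;
}

struct unmap_ctx {
	bool defer_init;	/* snapshot taken before the walk starts */
};

static bool defer_now(const struct unmap_ctx *ctx)
{
	bool defer = can_defer_flush();

	/* The decision must match the snapshot taken at walk start. */
	assert(ctx->defer_init == defer);
	return defer;
}

int main(void)
{
	struct unmap_ctx ctx = { .defer_init = can_defer_flush() };
	return defer_now(&ctx) ? 0 : 1;
}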