From patchwork Mon Feb 6 17:23:34 2023
From: Raghavendra Rao Ananta
Date: Mon, 6 Feb 2023 17:23:34 +0000
Subject: [PATCH v2 1/7] arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range
Message-ID: <20230206172340.2639971-2-rananta@google.com>
In-Reply-To: <20230206172340.2639971-1-rananta@google.com>
To: Oliver Upton, Marc Zyngier, Ricardo Koller, Reiji Watanabe, James Morse,
    Alexandru Elisei, Suzuki K Poulose, Will Deacon
Cc: Paolo Bonzini, Catalin Marinas, Jing Zhang, Colton Lewis,
    Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Currently, the core TLB flush functionality of __flush_tlb_range() hardcodes
vae1is (and variants) as the flush operation. Upcoming patches want KVM to
reuse this core algorithm with ipas2e1is for IPA-based, range-based TLB
invalidations. Hence, extract the core flush logic of __flush_tlb_range()
into its own macro that accepts an 'op' argument for the TLBI operation, so
that other callers (KVM) can reuse it.

No functional changes intended.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/tlbflush.h | 107 +++++++++++++++---------------
 1 file changed, 54 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 412a3b9a3c25d..9a57eae14e576 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -278,14 +278,60 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
  */
 #define MAX_TLBI_OPS	PTRS_PER_PTE
 
+/* When the CPU does not support TLB range operations, flush the TLB
+ * entries one by one at the granularity of 'stride'. If the TLB
+ * range ops are supported, then:
+ *
+ * 1. If 'pages' is odd, flush the first page through non-range
+ *    operations;
+ *
+ * 2. For remaining pages: the minimum range granularity is decided
+ *    by 'scale', so multiple range TLBI operations may be required.
+ *    Start from scale = 0, flush the corresponding number of pages
+ *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
+ *    until no pages left.
+ *
+ * Note that certain ranges can be represented by either num = 31 and
+ * scale or num = 0 and scale + 1. The loop below favours the latter
+ * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
+ */
+#define __flush_tlb_range_op(op, start, pages, stride, asid, tlb_level, tlbi_user) do {	\
+	int num = 0;								\
+	int scale = 0;								\
+	unsigned long addr;							\
+										\
+	while (pages > 0) {							\
+		if (!system_supports_tlb_range() ||				\
+		    pages % 2 == 1) {						\
+			addr = __TLBI_VADDR(start, asid);			\
+			__tlbi_level(op, addr, tlb_level);			\
+			if (tlbi_user)						\
+				__tlbi_user_level(op, addr, tlb_level);		\
+			start += stride;					\
+			pages -= stride >> PAGE_SHIFT;				\
+			continue;						\
+		}								\
+										\
+		num = __TLBI_RANGE_NUM(pages, scale);				\
+		if (num >= 0) {							\
+			addr = __TLBI_VADDR_RANGE(start, asid, scale,		\
+						  num, tlb_level);		\
+			__tlbi(r##op, addr);					\
+			if (tlbi_user)						\
+				__tlbi_user(r##op, addr);			\
+			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;	\
+			pages -= __TLBI_RANGE_PAGES(num, scale);		\
+		}								\
+		scale++;							\
+	}									\
+} while (0)
+
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long start, unsigned long end,
 				     unsigned long stride, bool last_level,
 				     int tlb_level)
 {
-	int num = 0;
-	int scale = 0;
-	unsigned long asid, addr, pages;
+	unsigned long asid, pages;
 
 	start = round_down(start, stride);
 	end = round_up(end, stride);
@@ -307,56 +353,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ishst);
 	asid = ASID(vma->vm_mm);
 
-	/*
-	 * When the CPU does not support TLB range operations, flush the TLB
-	 * entries one by one at the granularity of 'stride'. If the TLB
-	 * range ops are supported, then:
-	 *
-	 * 1. If 'pages' is odd, flush the first page through non-range
-	 *    operations;
-	 *
-	 * 2. For remaining pages: the minimum range granularity is decided
-	 * by 'scale', so multiple range TLBI operations may be required.
-	 * Start from scale = 0, flush the corresponding number of pages
-	 * ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
-	 * until no pages left.
-	 *
-	 * Note that certain ranges can be represented by either num = 31 and
-	 * scale or num = 0 and scale + 1. The loop below favours the latter
-	 * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
-	 */
-	while (pages > 0) {
-		if (!system_supports_tlb_range() ||
-		    pages % 2 == 1) {
-			addr = __TLBI_VADDR(start, asid);
-			if (last_level) {
-				__tlbi_level(vale1is, addr, tlb_level);
-				__tlbi_user_level(vale1is, addr, tlb_level);
-			} else {
-				__tlbi_level(vae1is, addr, tlb_level);
-				__tlbi_user_level(vae1is, addr, tlb_level);
-			}
-			start += stride;
-			pages -= stride >> PAGE_SHIFT;
-			continue;
-		}
-
-		num = __TLBI_RANGE_NUM(pages, scale);
-		if (num >= 0) {
-			addr = __TLBI_VADDR_RANGE(start, asid, scale,
-						  num, tlb_level);
-			if (last_level) {
-				__tlbi(rvale1is, addr);
-				__tlbi_user(rvale1is, addr);
-			} else {
-				__tlbi(rvae1is, addr);
-				__tlbi_user(rvae1is, addr);
-			}
-			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
-			pages -= __TLBI_RANGE_PAGES(num, scale);
-		}
-		scale++;
-	}
+	if (last_level)
+		__flush_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level, true);
+	else
+		__flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
+
 	dsb(ish);
 }
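The (num, scale) arithmetic in the comment above can be hard to picture. The
following stand-alone C sketch is illustrative only and is not part of the
patch: it re-implements the arithmetic of the __TLBI_RANGE_NUM() and
__TLBI_RANGE_PAGES() macros as local helpers (the helper names and the
example page count are invented here) and prints how a flush request would
be decomposed into single-page and range TLBIs.

#include <stdio.h>

#define RANGE_MASK	0x1f	/* mirrors TLBI_RANGE_MASK (5-bit NUM field) */

/* mirrors __TLBI_RANGE_NUM(): -1 means "no range op possible at this scale" */
static long range_num(unsigned long pages, int scale)
{
	return (long)((pages >> (5 * scale + 1)) & RANGE_MASK) - 1;
}

/* mirrors __TLBI_RANGE_PAGES(): (num + 1) * 2^(5*scale + 1) pages */
static unsigned long range_pages(long num, int scale)
{
	return (unsigned long)(num + 1) << (5 * scale + 1);
}

int main(void)
{
	unsigned long pages = 4099;	/* arbitrary example page count */
	int scale = 0;

	/* The kernel bounds 'pages' (MAX_TLBI_RANGE_PAGES), so scale stays small. */
	while (pages > 0) {
		if (pages % 2 == 1) {		/* odd remainder: one non-range TLBI */
			printf("single-page flush\n");
			pages -= 1;
			continue;
		}
		long num = range_num(pages, scale);
		if (num >= 0) {
			printf("range flush: scale=%d num=%ld -> %lu pages\n",
			       scale, num, range_pages(num, scale));
			pages -= range_pages(num, scale);
		}
		scale++;
	}
	return 0;
}

For pages = 4099 this prints one single-page flush, a 2-page range flush at
scale 0, and a 4096-page range flush at scale 2, which is exactly the kind
of decomposition the macro performs.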
From patchwork Mon Feb 6 17:23:35 2023
From: Raghavendra Rao Ananta
Date: Mon, 6 Feb 2023 17:23:35 +0000
Subject: [PATCH v2 2/7] KVM: arm64: Add FEAT_TLBIRANGE support
Message-ID: <20230206172340.2639971-3-rananta@google.com>
In-Reply-To: <20230206172340.2639971-1-rananta@google.com>
Define a generic macro, __kvm_tlb_flush_range(), to invalidate the TLBs over
a range of addresses. It accepts 'op' as a generic TLBI operation. Upcoming
patches will use it to implement IPA-based TLB invalidations (ipas2e1is).

If the system doesn't support FEAT_TLBIRANGE, the implementation falls back
to flushing the pages one by one for the supplied range.
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_asm.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 43c3bc0f9544d..995ff048e8851 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -221,6 +221,24 @@ DECLARE_KVM_NVHE_SYM(__per_cpu_end);
 DECLARE_KVM_HYP_SYM(__bp_harden_hyp_vecs);
 #define __bp_harden_hyp_vecs	CHOOSE_HYP_SYM(__bp_harden_hyp_vecs)
 
+#define __kvm_tlb_flush_range(op, mmu, start, end, level, tlb_level) do {	\
+	unsigned long pages, stride;						\
+										\
+	stride = kvm_granule_size(level);					\
+	start = round_down(start, stride);					\
+	end = round_up(end, stride);						\
+	pages = (end - start) >> PAGE_SHIFT;					\
+										\
+	if ((!system_supports_tlb_range() &&					\
+	     (end - start) >= (MAX_TLBI_OPS * stride)) ||			\
+	    pages >= MAX_TLBI_RANGE_PAGES) {					\
+		__kvm_tlb_flush_vmid(mmu);					\
+		break;								\
+	}									\
+										\
+	__flush_tlb_range_op(op, start, pages, stride, 0, tlb_level, false);	\
+} while (0)
+
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
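One detail worth calling out: the 'break' in the macro above is the early
exit for the whole-VMID fallback, and it only works because the body is
wrapped in do { ... } while (0). A tiny stand-alone sketch of that idiom
follows; it is illustrative only, and the macro name and values are invented
for the example.

#include <stdio.h>

/*
 * Not kernel code: shows the do { ... } while (0) + break idiom.
 * 'break' leaves the pseudo-loop immediately, acting as an early return
 * from the macro body, the same way __kvm_tlb_flush_range() bails out to
 * the full-VMID flush.
 */
#define example_flush(pages) do {					\
	if ((pages) == 0)						\
		break;		/* early exit, nothing to do */		\
	printf("range flush of %d pages\n", (pages));			\
} while (0)

int main(void)
{
	example_flush(0);	/* takes the early-exit branch, prints nothing */
	example_flush(512);	/* prints "range flush of 512 pages" */
	return 0;
}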
From patchwork Mon Feb 6 17:23:36 2023
From: Raghavendra Rao Ananta
Date: Mon, 6 Feb 2023 17:23:36 +0000
Subject: [PATCH v2 3/7] KVM: arm64: Implement __kvm_tlb_flush_range_vmid_ipa()
Message-ID: <20230206172340.2639971-4-rananta@google.com>
In-Reply-To: <20230206172340.2639971-1-rananta@google.com>

Define __kvm_tlb_flush_range_vmid_ipa() (for both VHE and nVHE) to flush a
range of stage-2 page-table entries by IPA in one go. If the system supports
FEAT_TLBIRANGE, the following patches can then conveniently replace global
TLBIs such as vmalls12e1is in the map, unmap, and dirty-logging paths with
ripas2e1is.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_asm.h   |  3 +++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 12 ++++++++++++
 arch/arm64/kvm/hyp/nvhe/tlb.c      | 28 ++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/vhe/tlb.c       | 24 ++++++++++++++++++++++++
 4 files changed, 67 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 995ff048e8851..80a8ea85e84f8 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -79,6 +79,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vm,
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vcpu,
 	__KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm,
+	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_range_vmid_ipa,
 };
 
 #define DECLARE_KVM_VHE_SYM(sym)	extern char sym[]
@@ -243,6 +244,8 @@ extern void __kvm_flush_vm_context(void);
 extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
 				     int level);
+extern void __kvm_tlb_flush_range_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t start,
+					   phys_addr_t end, int level, int tlb_level);
 extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
 
 extern void __kvm_timer_set_cntvoff(u64 cntvoff);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 728e01d4536b0..5787eee4c9fe4 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -125,6 +125,17 @@ static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt)
 	__kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
 }
 
+static void handle___kvm_tlb_flush_range_vmid_ipa(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+	DECLARE_REG(phys_addr_t, start, host_ctxt, 2);
+	DECLARE_REG(phys_addr_t, end, host_ctxt, 3);
+	DECLARE_REG(int, level, host_ctxt, 4);
+	DECLARE_REG(int, tlb_level, host_ctxt, 5);
+
+	__kvm_tlb_flush_range_vmid_ipa(kern_hyp_va(mmu), start, end, level, tlb_level);
+}
+
 static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
@@ -315,6 +326,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
+	HANDLE_FUNC(__kvm_tlb_flush_range_vmid_ipa),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid),
 	HANDLE_FUNC(__kvm_flush_cpu_context),
 	HANDLE_FUNC(__kvm_timer_set_cntvoff),
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index d296d617f5896..7398dd00445e7 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -109,6 +109,34 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	__tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_range_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t start,
+				    phys_addr_t end, int level, int tlb_level)
+{
+	struct tlb_inv_context cxt;
+
+	dsb(ishst);
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(mmu, &cxt);
+
+	__kvm_tlb_flush_range(ipas2e1is, mmu, start, end, level, tlb_level);
+
+	/*
+	 * Range-based ipas2e1is flushes only Stage-2 entries, and since the
+	 * VA isn't available for Stage-1 entries, flush the entire stage-1.
+	 */
+	dsb(ish);
+	__tlbi(vmalle1is);
+	dsb(ish);
+	isb();
+
+	/* See the comment below in __kvm_tlb_flush_vmid_ipa() */
+	if (icache_is_vpipt())
+		icache_inval_all_pou();
+
+	__tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index 24cef9b87f9e9..e9c1d69f7ddf7 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -111,6 +111,30 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	__tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_range_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t start,
+				    phys_addr_t end, int level, int tlb_level)
+{
+	struct tlb_inv_context cxt;
+
+	dsb(ishst);
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(mmu, &cxt);
+
+	__kvm_tlb_flush_range(ipas2e1is, mmu, start, end, level, tlb_level);
+
+	/*
+	 * Range-based ipas2e1is flushes only Stage-2 entries, and since the
+	 * VA isn't available for Stage-1 entries, flush the entire stage-1.
+	 */
+	dsb(ish);
+	__tlbi(vmalle1is);
+	dsb(ish);
+	isb();
+
+	__tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;
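On the host side, callers reach these hypervisor implementations through the
usual kvm_call_hyp() plumbing; patches 4 and 6 of this series use exactly
this shape. A minimal sketch of such a call site (the variable names are
placeholders for whatever the caller has at hand):

/*
 * Sketch of a host-side call site. On nVHE this traps to
 * handle___kvm_tlb_flush_range_vmid_ipa() added above; on VHE it calls
 * __kvm_tlb_flush_range_vmid_ipa() directly.
 */
kvm_call_hyp(__kvm_tlb_flush_range_vmid_ipa, mmu, start, end, level, tlb_level);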
From patchwork Mon Feb 6 17:23:37 2023
From: Raghavendra Rao Ananta
Date: Mon, 6 Feb 2023 17:23:37 +0000
Subject: [PATCH v2 4/7] KVM: arm64: Implement kvm_arch_flush_remote_tlbs_range()
Message-ID: <20230206172340.2639971-5-rananta@google.com>
In-Reply-To: <20230206172340.2639971-1-rananta@google.com>

Implement kvm_arch_flush_remote_tlbs_range() for arm64 so that it can use
the range-based TLBI instructions when they are supported.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_host.h |  3 +++
 arch/arm64/kvm/mmu.c              | 15 +++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index dee530d75b957..211fab0c1de74 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1002,6 +1002,9 @@ struct kvm *kvm_arch_alloc_vm(void);
 #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
 int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
+
 static inline bool kvm_vm_is_protected(struct kvm *kvm)
 {
 	return false;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index e98910a8d0af6..409cb187f4911 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -91,6 +91,21 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 	return 0;
 }
 
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
+{
+	phys_addr_t start, end;
+
+	if (!system_supports_tlb_range())
+		return -EOPNOTSUPP;
+
+	start = start_gfn << PAGE_SHIFT;
+	end = (start_gfn + pages) << PAGE_SHIFT;
+
+	kvm_call_hyp(__kvm_tlb_flush_range_vmid_ipa, &kvm->arch.mmu,
+			start, end, KVM_PGTABLE_MAX_LEVELS - 1, 0);
+	return 0;
+}
+
 static bool kvm_is_device_pfn(unsigned long pfn)
 {
 	return !pfn_is_map_memory(pfn);
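To make the gfn-to-IPA conversion above concrete, here is a small worked
example; the numbers are invented and assume 4 KiB pages (PAGE_SHIFT = 12):

/* Example only: start_gfn = 0x100, pages = 512, PAGE_SHIFT = 12. */
phys_addr_t start = 0x100ULL << 12;		/* 0x100000 */
phys_addr_t end   = (0x100ULL + 512) << 12;	/* 0x300000 = start + 2 MiB */

Passing KVM_PGTABLE_MAX_LEVELS - 1 (the leaf level) as 'level' makes
__kvm_tlb_flush_range() use a page-sized stride, so the whole [start, end)
window is flushed at page granularity.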
From patchwork Mon Feb 6 17:23:38 2023
From: Raghavendra Rao Ananta
Date: Mon, 6 Feb 2023 17:23:38 +0000
Subject: [PATCH v2 5/7] KVM: arm64: Flush only the memslot after write-protect
Message-ID: <20230206172340.2639971-6-rananta@google.com>
In-Reply-To: <20230206172340.2639971-1-rananta@google.com>

After write-protecting a region, KVM currently invalidates all TLB entries
using kvm_flush_remote_tlbs(). Instead, scope the invalidation to the
targeted memslot. If supported, the architecture uses the range-based TLBI
instructions to flush the memslot; otherwise it falls back to flushing all
of the TLBs.
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 409cb187f4911..3e33af0daf1d3 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -981,7 +981,7 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
 	write_lock(&kvm->mmu_lock);
 	stage2_wp_range(&kvm->arch.mmu, start, end);
 	write_unlock(&kvm->mmu_lock);
-	kvm_flush_remote_tlbs(kvm);
+	kvm_flush_remote_tlbs_memslot(kvm, memslot);
 }
 
 /**
From patchwork Mon Feb 6 17:23:39 2023
From: Raghavendra Rao Ananta
Date: Mon, 6 Feb 2023 17:23:39 +0000
Subject: [PATCH v2 6/7] KVM: arm64: Break the table entries using TLBI range instructions
Message-ID: <20230206172340.2639971-7-rananta@google.com>
In-Reply-To: <20230206172340.2639971-1-rananta@google.com>

Currently, when breaking up stage-2 table entries, KVM flushes the entire
VM's context using the 'vmalls12e1is' TLBI operation. One problematic
situation is collapsing table entries into a hugepage, specifically when the
VM is faulting in many hugepages (say, after dirty logging). This creates a
performance penalty for the guest: pages that had already been faulted in
earlier have to have their TLB entries refilled again.

Hence, if the system supports it, use __kvm_tlb_flush_range_vmid_ipa() to
flush only the range of pages governed by the table entry, while leaving the
other TLB entries alone. An upcoming patch also takes advantage of this when
breaking up table entries during the unmap operation.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/hyp/pgtable.c | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index b11cf2c618a6c..0858d1fa85d6b 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -686,6 +686,20 @@ static bool stage2_try_set_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_
 	return cmpxchg(ctx->ptep, ctx->old, new) == ctx->old;
 }
 
+static void kvm_pgtable_stage2_flush_range(struct kvm_s2_mmu *mmu, u64 start, u64 end,
+					   u32 level, u32 tlb_level)
+{
+	if (system_supports_tlb_range())
+		kvm_call_hyp(__kvm_tlb_flush_range_vmid_ipa, mmu, start, end, level, tlb_level);
+	else
+		/*
+		 * Invalidate the whole stage-2, as we may have numerous leaf
+		 * entries below us which would otherwise need invalidating
+		 * individually.
+		 */
+		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
+}
+
 /**
  * stage2_try_break_pte() - Invalidates a pte according to the
  *			    'break-before-make' requirements of the
@@ -721,10 +735,13 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
 	 * Perform the appropriate TLB invalidation based on the evicted pte
 	 * value (if any).
 	 */
-	if (kvm_pte_table(ctx->old, ctx->level))
-		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
-	else if (kvm_pte_valid(ctx->old))
+	if (kvm_pte_table(ctx->old, ctx->level)) {
+		u64 end = ctx->addr + kvm_granule_size(ctx->level);
+
+		kvm_pgtable_stage2_flush_range(mmu, ctx->addr, end, ctx->level, 0);
+	} else if (kvm_pte_valid(ctx->old)) {
 		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
+	}
 
 	if (stage2_pte_is_counted(ctx->old))
 		mm_ops->put_page(ctx->ptep);
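For a sense of scale of what the range flush above covers, assume a 4 KiB
translation granule (an assumption for this example, not something the patch
states): kvm_granule_size() then gives 2 MiB for a level-2 entry and 1 GiB
for a level-1 entry, so only that window is invalidated instead of the whole
VMID. A hypothetical instance, with an invented IPA:

/*
 * Example only, for a 4 KiB granule: breaking a level-2 table entry at
 * IPA 0x40200000 now issues a 2 MiB range flush rather than a
 * vmalls12e1is over the whole guest.
 */
u64 addr = 0x40200000;
u64 end  = addr + kvm_granule_size(2);	/* addr + SZ_2M = 0x40400000 */

kvm_pgtable_stage2_flush_range(mmu, addr, end, 2, 0);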
From patchwork Mon Feb 6 17:23:40 2023
From: Raghavendra Rao Ananta
Date: Mon, 6 Feb 2023 17:23:40 +0000
Subject: [PATCH v2 7/7] KVM: arm64: Create a fast stage-2 unmap path
Message-ID: <20230206172340.2639971-8-rananta@google.com>
In-Reply-To: <20230206172340.2639971-1-rananta@google.com>

The current implementation of the stage-2 unmap walker traverses the entire
page-table to clear and flush the TLBs for each entry. This can be very
expensive, especially if the VM is not backed by hugepages. The unmap
operation can be made more efficient by disconnecting the table at the very
top (the level at which the largest block mapping can be hosted) and doing
the rest of the unmapping using free_removed_table(). If the system supports
FEAT_TLBIRANGE, flush the entire range that has been disconnected from the
rest of the page-table.

Suggested-by: Ricardo Koller
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/hyp/pgtable.c | 44 ++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 0858d1fa85d6b..af3729d0971f2 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1017,6 +1017,49 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	return 0;
 }
 
+/*
+ * The fast walker executes only if the unmap size is exactly equal to the
+ * largest block mapping supported (i.e. at KVM_PGTABLE_MIN_BLOCK_LEVEL),
+ * such that the underneath hierarchy at KVM_PGTABLE_MIN_BLOCK_LEVEL can
+ * be disconnected from the rest of the page-table without the need to
+ * traverse all the PTEs, at all the levels, and unmap each and every one
+ * of them. The disconnected table is freed using free_removed_table().
+ */
+static int fast_stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
+				    enum kvm_pgtable_walk_flags visit)
+{
+	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
+	kvm_pte_t *childp = kvm_pte_follow(ctx->old, mm_ops);
+	struct kvm_s2_mmu *mmu = ctx->arg;
+
+	if (!kvm_pte_valid(ctx->old) || ctx->level != KVM_PGTABLE_MIN_BLOCK_LEVEL)
+		return 0;
+
+	if (!stage2_try_break_pte(ctx, mmu))
+		return -EAGAIN;
+
+	/*
+	 * Gain back a reference for stage2_unmap_walker() to free
+	 * this table entry from KVM_PGTABLE_MIN_BLOCK_LEVEL - 1.
+	 */
+	mm_ops->get_page(ctx->ptep);
+
+	mm_ops->free_removed_table(childp, ctx->level);
+	return 0;
+}
+
+static void kvm_pgtable_try_fast_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct kvm_pgtable_walker walker = {
+		.cb	= fast_stage2_unmap_walker,
+		.arg	= pgt->mmu,
+		.flags	= KVM_PGTABLE_WALK_TABLE_PRE,
+	};
+
+	if (size == kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL))
+		kvm_pgtable_walk(pgt, addr, size, &walker);
+}
+
 int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
 	struct kvm_pgtable_walker walker = {
@@ -1025,6 +1068,7 @@ int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
 	};
 
+	kvm_pgtable_try_fast_stage2_unmap(pgt, addr, size);
 	return kvm_pgtable_walk(pgt, addr, size, &walker);
 }
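To illustrate when the fast path above engages, here is a hypothetical call
site (not part of the series; 'ipa' and 'pgt' are assumed to be in scope):
the unmap has to cover exactly one block-sized chunk at the largest block
level; any other size falls through to the regular walker unchanged.

/*
 * Hypothetical caller: unmap exactly one largest-block-sized chunk so
 * fast_stage2_unmap_walker() can disconnect the whole sub-table in one go.
 * Any other 'size' leaves behaviour exactly as it was before this patch.
 */
u64 blksz = kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL);
u64 addr  = ALIGN_DOWN(ipa, blksz);

kvm_pgtable_stage2_unmap(pgt, addr, blksz);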