From patchwork Wed Apr 14 11:22:57 2021
X-Patchwork-Submitter: Shameer Kolothum
X-Patchwork-Id: 12202369
Received: from DGGEMS414-HUB.china.huawei.com (unknown [172.30.72.60])
by szxga07-in.huawei.com (SkyGuard) with ESMTP id 4FL0Ry0MWVzB0mp; Wed, 14 Apr 2021 19:22:06 +0800 (CST) Received: from S00345302A-PC.china.huawei.com (10.47.82.32) by DGGEMS414-HUB.china.huawei.com (10.3.19.214) with Microsoft SMTP Server id 14.3.498.0; Wed, 14 Apr 2021 19:24:13 +0800 From: Shameer Kolothum To: , , CC: , , , , , , , , Subject: [PATCH v4 01/16] arm64/mm: Introduce asid_info structure and move asid_generation/asid_map to it Date: Wed, 14 Apr 2021 12:22:57 +0100 Message-ID: <20210414112312.13704-2-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com> References: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.47.82.32] X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210414_042430_104124_34A8B715 X-CRM114-Status: GOOD ( 18.41 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Julien Grall In an attempt to make the ASID allocator generic, create a new structure asid_info to store all the information necessary for the allocator. For now, move the variables asid_generation, asid_map, cur_idx to the new structure asid_info. Follow-up patches will move more variables. Note to avoid more renaming aftwards, a local variable 'info' has been created and is a pointer to the ASID allocator structure. Signed-off-by: Julien Grall Signed-off-by: Shameer Kolothum --- v3-->v4: Move cur_idx into asid_info. --- arch/arm64/mm/context.c | 71 +++++++++++++++++++++++------------------ 1 file changed, 40 insertions(+), 31 deletions(-) diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index 001737a8f309..783f8bdb91ee 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -20,8 +20,12 @@ static u32 asid_bits; static DEFINE_RAW_SPINLOCK(cpu_asid_lock); -static atomic64_t asid_generation; -static unsigned long *asid_map; +static struct asid_info +{ + atomic64_t generation; + unsigned long *map; + unsigned int map_idx; +} asid_info; static DEFINE_PER_CPU(atomic64_t, active_asids); static DEFINE_PER_CPU(u64, reserved_asids); @@ -88,26 +92,26 @@ static void set_kpti_asid_bits(unsigned long *map) memset(map, 0xaa, len); } -static void set_reserved_asid_bits(void) +static void set_reserved_asid_bits(struct asid_info *info) { if (pinned_asid_map) - bitmap_copy(asid_map, pinned_asid_map, NUM_USER_ASIDS); + bitmap_copy(info->map, pinned_asid_map, NUM_USER_ASIDS); else if (arm64_kernel_unmapped_at_el0()) - set_kpti_asid_bits(asid_map); + set_kpti_asid_bits(info->map); else - bitmap_clear(asid_map, 0, NUM_USER_ASIDS); + bitmap_clear(info->map, 0, NUM_USER_ASIDS); } -#define asid_gen_match(asid) \ - (!(((asid) ^ atomic64_read(&asid_generation)) >> asid_bits)) +#define asid_gen_match(asid, info) \ + (!(((asid) ^ atomic64_read(&(info)->generation)) >> asid_bits)) -static void flush_context(void) +static void flush_context(struct asid_info *info) { int i; u64 asid; /* Update the list of reserved ASIDs and the ASID bitmap. 
*/ - set_reserved_asid_bits(); + set_reserved_asid_bits(info); for_each_possible_cpu(i) { asid = atomic64_xchg_relaxed(&per_cpu(active_asids, i), 0); @@ -120,7 +124,7 @@ static void flush_context(void) */ if (asid == 0) asid = per_cpu(reserved_asids, i); - __set_bit(asid2idx(asid), asid_map); + __set_bit(asid2idx(asid), info->map); per_cpu(reserved_asids, i) = asid; } @@ -155,11 +159,10 @@ static bool check_update_reserved_asid(u64 asid, u64 newasid) return hit; } -static u64 new_context(struct mm_struct *mm) +static u64 new_context(struct asid_info *info, struct mm_struct *mm) { - static u32 cur_idx = 1; u64 asid = atomic64_read(&mm->context.id); - u64 generation = atomic64_read(&asid_generation); + u64 generation = atomic64_read(&info->generation); if (asid != 0) { u64 newasid = generation | (asid & ~ASID_MASK); @@ -183,7 +186,7 @@ static u64 new_context(struct mm_struct *mm) * We had a valid ASID in a previous life, so try to re-use * it if possible. */ - if (!__test_and_set_bit(asid2idx(asid), asid_map)) + if (!__test_and_set_bit(asid2idx(asid), info->map)) return newasid; } @@ -194,21 +197,21 @@ static u64 new_context(struct mm_struct *mm) * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd * pairs. */ - asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx); + asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, info->map_idx); if (asid != NUM_USER_ASIDS) goto set_asid; /* We're out of ASIDs, so increment the global generation count */ generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION, - &asid_generation); - flush_context(); + &info->generation); + flush_context(info); /* We have more ASIDs than CPUs, so this will always succeed */ - asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1); + asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, 1); set_asid: - __set_bit(asid, asid_map); - cur_idx = asid; + __set_bit(asid, info->map); + info->map_idx = asid; return idx2asid(asid) | generation; } @@ -217,6 +220,7 @@ void check_and_switch_context(struct mm_struct *mm) unsigned long flags; unsigned int cpu; u64 asid, old_active_asid; + struct asid_info *info = &asid_info; if (system_supports_cnp()) cpu_set_reserved_ttbr0(); @@ -238,7 +242,7 @@ void check_and_switch_context(struct mm_struct *mm) * because atomic RmWs are totally ordered for a given location. */ old_active_asid = atomic64_read(this_cpu_ptr(&active_asids)); - if (old_active_asid && asid_gen_match(asid) && + if (old_active_asid && asid_gen_match(asid, info) && atomic64_cmpxchg_relaxed(this_cpu_ptr(&active_asids), old_active_asid, asid)) goto switch_mm_fastpath; @@ -246,8 +250,8 @@ void check_and_switch_context(struct mm_struct *mm) raw_spin_lock_irqsave(&cpu_asid_lock, flags); /* Check that our ASID belongs to the current generation. */ asid = atomic64_read(&mm->context.id); - if (!asid_gen_match(asid)) { - asid = new_context(mm); + if (!asid_gen_match(asid, info)) { + asid = new_context(info, mm); atomic64_set(&mm->context.id, asid); } @@ -274,6 +278,7 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm) { unsigned long flags; u64 asid; + struct asid_info *info = &asid_info; if (!pinned_asid_map) return 0; @@ -290,12 +295,12 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm) goto out_unlock; } - if (!asid_gen_match(asid)) { + if (!asid_gen_match(asid, info)) { /* * We went through one or more rollover since that ASID was * used. Ensure that it is still valid, or generate a new one. 
*/ - asid = new_context(mm); + asid = new_context(info, mm); atomic64_set(&mm->context.id, asid); } @@ -400,14 +405,18 @@ arch_initcall(asids_update_limit); static int asids_init(void) { + struct asid_info *info = &asid_info; + asid_bits = get_cpu_asid_bits(); - atomic64_set(&asid_generation, ASID_FIRST_VERSION); - asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*asid_map), - GFP_KERNEL); - if (!asid_map) + atomic64_set(&info->generation, ASID_FIRST_VERSION); + info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*info->map), + GFP_KERNEL); + if (!info->map) panic("Failed to allocate bitmap for %lu ASIDs\n", NUM_USER_ASIDS); + info->map_idx = 1; + pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*pinned_asid_map), GFP_KERNEL); nr_pinned_asids = 0; @@ -418,7 +427,7 @@ static int asids_init(void) * and reserve kernel ASID's from beginning. */ if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) - set_kpti_asid_bits(asid_map); + set_kpti_asid_bits(info->map); return 0; } early_initcall(asids_init); From patchwork Wed Apr 14 11:22:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameer Kolothum X-Patchwork-Id: 12202371 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 712FBC433B4 for ; Wed, 14 Apr 2021 11:27:20 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id B5A35613C7 for ; Wed, 14 Apr 2021 11:27:19 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B5A35613C7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding :Content-Type:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-ID:Date: Subject:CC:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=Yn82yuB0qOm/jf1NKfB0N3zF3eGxhq0tZs59t1lCFmk=; b=YdWgESiY8vHP5WpVqHPi9+vHm 9LoZrFSVOwEkcghVasZXMDaPdgwMbCe+VbaolZ3iGlZpFgQUW7n87ebzSMEwLLJ1gC3k/ocU7jQT5 6NpTxUEWJt2p2TYi+RshDKJu8kt0npE7fFD7LUcxED8t048tzQcLC0yc5HPIESqN7U3JYutV5m85v zJkgzV0RHrjaXcg6A3oyV0utk+kKGX6kXbe15igSlBusw1NY68X/Xhm8qTKfadL2rTwhWkfHPPZaq vUK474JoHLw5yaWdGrLqqhHqj9/3w/vZTD7c87hl3BIm5+767euPhILMEYOiHYrCkQAiUmMD7Rk8S wikL/fEDA==; Received: from localhost ([::1] helo=desiato.infradead.org) by desiato.infradead.org with esmtp (Exim 4.94 #2 (Red Hat Linux)) id 1lWddx-00COnm-Pn; Wed, 14 Apr 2021 11:25:14 +0000 Received: from bombadil.infradead.org ([2607:7c80:54:e::133]) by desiato.infradead.org with esmtps (Exim 4.94 #2 (Red Hat Linux)) id 1lWddV-00COhe-T3 
for linux-arm-kernel@desiato.infradead.org; Wed, 14 Apr 2021 11:24:46 +0000
From: Shameer Kolothum
Subject: [PATCH v4 02/16] arm64/mm: Move active_asids and reserved_asids to asid_info
Date: Wed, 14 Apr 2021 12:22:58 +0100
Message-ID: <20210414112312.13704-3-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>

From: Julien Grall

The variables active_asids and reserved_asids hold information for a given
ASID allocator. So move them to the structure asid_info.

At the same time, introduce wrappers to access the active and reserved
ASIDs to make the code clearer.

Signed-off-by: Julien Grall
Signed-off-by: Shameer Kolothum
---
v3-->v4: Keep the this_cpu_ptr in the fastpath.
See c4885bbb3afe("arm64/mm: save memory access in check_and_switch_context() fast switch path") --- arch/arm64/mm/context.c | 32 ++++++++++++++++++++------------ 1 file changed, 20 insertions(+), 12 deletions(-) diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index 783f8bdb91ee..42e011094571 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -25,8 +25,13 @@ static struct asid_info atomic64_t generation; unsigned long *map; unsigned int map_idx; + atomic64_t __percpu *active; + u64 __percpu *reserved; } asid_info; +#define active_asid(info, cpu) (*per_cpu_ptr((info)->active, cpu)) +#define reserved_asid(info, cpu) (*per_cpu_ptr((info)->reserved, cpu)) + static DEFINE_PER_CPU(atomic64_t, active_asids); static DEFINE_PER_CPU(u64, reserved_asids); static cpumask_t tlb_flush_pending; @@ -114,7 +119,7 @@ static void flush_context(struct asid_info *info) set_reserved_asid_bits(info); for_each_possible_cpu(i) { - asid = atomic64_xchg_relaxed(&per_cpu(active_asids, i), 0); + asid = atomic64_xchg_relaxed(&active_asid(info, i), 0); /* * If this CPU has already been through a * rollover, but hasn't run another task in @@ -123,9 +128,9 @@ static void flush_context(struct asid_info *info) * the process it is still running. */ if (asid == 0) - asid = per_cpu(reserved_asids, i); + asid = reserved_asid(info, i); __set_bit(asid2idx(asid), info->map); - per_cpu(reserved_asids, i) = asid; + reserved_asid(info, i) = asid; } /* @@ -135,7 +140,8 @@ static void flush_context(struct asid_info *info) cpumask_setall(&tlb_flush_pending); } -static bool check_update_reserved_asid(u64 asid, u64 newasid) +static bool check_update_reserved_asid(struct asid_info *info, u64 asid, + u64 newasid) { int cpu; bool hit = false; @@ -150,9 +156,9 @@ static bool check_update_reserved_asid(u64 asid, u64 newasid) * generation. */ for_each_possible_cpu(cpu) { - if (per_cpu(reserved_asids, cpu) == asid) { + if (reserved_asid(info, cpu) == asid) { hit = true; - per_cpu(reserved_asids, cpu) = newasid; + reserved_asid(info, cpu) = newasid; } } @@ -171,7 +177,7 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm) * If our current ASID was active during a rollover, we * can continue to use it and this was just a false alarm. */ - if (check_update_reserved_asid(asid, newasid)) + if (check_update_reserved_asid(info, asid, newasid)) return newasid; /* @@ -229,8 +235,8 @@ void check_and_switch_context(struct mm_struct *mm) /* * The memory ordering here is subtle. - * If our active_asids is non-zero and the ASID matches the current - * generation, then we update the active_asids entry with a relaxed + * If our active_asid is non-zero and the ASID matches the current + * generation, then we update the active_asid entry with a relaxed * cmpxchg. Racing with a concurrent rollover means that either: * * - We get a zero back from the cmpxchg and end up waiting on the @@ -241,9 +247,9 @@ void check_and_switch_context(struct mm_struct *mm) * relaxed xchg in flush_context will treat us as reserved * because atomic RmWs are totally ordered for a given location. 
*/ - old_active_asid = atomic64_read(this_cpu_ptr(&active_asids)); + old_active_asid = atomic64_read(this_cpu_ptr(info->active)); if (old_active_asid && asid_gen_match(asid, info) && - atomic64_cmpxchg_relaxed(this_cpu_ptr(&active_asids), + atomic64_cmpxchg_relaxed(this_cpu_ptr(info->active), old_active_asid, asid)) goto switch_mm_fastpath; @@ -259,7 +265,7 @@ void check_and_switch_context(struct mm_struct *mm) if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending)) local_flush_tlb_all(); - atomic64_set(this_cpu_ptr(&active_asids), asid); + atomic64_set(&active_asid(info, cpu), asid); raw_spin_unlock_irqrestore(&cpu_asid_lock, flags); switch_mm_fastpath: @@ -416,6 +422,8 @@ static int asids_init(void) NUM_USER_ASIDS); info->map_idx = 1; + info->active = &active_asids; + info->reserved = &reserved_asids; pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*pinned_asid_map), GFP_KERNEL); From patchwork Wed Apr 14 11:22:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameer Kolothum X-Patchwork-Id: 12202373 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A68D1C433B4 for ; Wed, 14 Apr 2021 11:27:23 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 213E0613C4 for ; Wed, 14 Apr 2021 11:27:23 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 213E0613C4 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding :Content-Type:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-ID:Date: Subject:CC:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=Lwbh6IL+EZvODp3YyNrN7GcikWIergt9VpY5G8hxmMU=; b=XSRsCu7og2cihaB37C0u/lfPp fr7dLg7xWxYPjgx6JEoYjfsViOVm9nXzMDZ+5alC/8o4l2JjyQXHe9cc9jgOm+wp5JvR1jYgzrMIr 5CtTn3QHu4DtpdDRKqnoK4T7iD6tZUZbXzOutos+YPVUF0kfaOVTILEql5mjOYbOxavUF7AFbpuNf b3rN/fp4oLcEiKwX3bCxZXk/OSRVYwQZt18vX19rTmpN/1w9un8SrPna+HYE5fNUQbGqg/BjLw6oR Oqu/7nSsT+u7+Tts40ZF7ZUTRSSzoMeFw6XnUmGkDwjvhy8ygpQ67iQq9u4HXO+B6fRQHTdoAVrwM FUwTBl58Q==; Received: from localhost ([::1] helo=desiato.infradead.org) by desiato.infradead.org with esmtp (Exim 4.94 #2 (Red Hat Linux)) id 1lWde9-00COqG-VM; Wed, 14 Apr 2021 11:25:26 +0000 Received: from bombadil.infradead.org ([2607:7c80:54:e::133]) by desiato.infradead.org with esmtps (Exim 4.94 #2 (Red Hat Linux)) id 1lWddY-00COiU-C6 for linux-arm-kernel@desiato.infradead.org; Wed, 14 Apr 2021 11:24:49 +0000 DKIM-Signature: v=1; a=rsa-sha256; 
From: Shameer Kolothum
Subject: [PATCH v4 03/16] arm64/mm: Move bits to asid_info
Date: Wed, 14 Apr 2021 12:22:59 +0100
Message-ID: <20210414112312.13704-4-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>

From: Julien Grall

The variable 'bits' holds information for a given ASID allocator. So move
it to the asid_info structure.

Because most of the macros were relying on 'bits', they now take an extra
parameter that is a pointer to the asid_info structure.
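For readers following the macro changes, here is a minimal userspace sketch
(an illustration only, not part of the patch; the structure is abbreviated to
the one field that matters) of the arithmetic ASID_MASK(info) and
asid2idx(info, asid) perform once 'bits' lives in asid_info:

#include <stdint.h>
#include <stdio.h>

/* Abbreviated stand-in for the kernel's asid_info; only 'bits' is needed here. */
struct asid_info {
	uint32_t bits;
};

/* Same arithmetic as ASID_MASK(info): the low 'bits' bits of a context id
 * hold the ASID, everything above them holds the generation.
 */
static uint64_t asid_mask(const struct asid_info *info)
{
	return ~((1ULL << info->bits) - 1);
}

/* Same arithmetic as asid2idx(info, asid): strip the generation. */
static uint64_t asid2idx(const struct asid_info *info, uint64_t asid)
{
	return asid & ~asid_mask(info);
}

int main(void)
{
	struct asid_info info = { .bits = 16 };
	uint64_t ctxid = (3ULL << info.bits) | 0x42;	/* generation 3, ASID 0x42 */

	printf("generation %llu, bitmap index %llu\n",
	       (unsigned long long)(ctxid >> info.bits),
	       (unsigned long long)asid2idx(&info, ctxid));
	return 0;
}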
Signed-off-by: Julien Grall Signed-off-by: Shameer Kolothum --- arch/arm64/mm/context.c | 70 +++++++++++++++++++++-------------------- 1 file changed, 36 insertions(+), 34 deletions(-) diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index 42e011094571..1fd40a42955c 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -17,7 +17,6 @@ #include #include -static u32 asid_bits; static DEFINE_RAW_SPINLOCK(cpu_asid_lock); static struct asid_info @@ -27,6 +26,7 @@ static struct asid_info unsigned int map_idx; atomic64_t __percpu *active; u64 __percpu *reserved; + u32 bits; } asid_info; #define active_asid(info, cpu) (*per_cpu_ptr((info)->active, cpu)) @@ -40,12 +40,12 @@ static unsigned long max_pinned_asids; static unsigned long nr_pinned_asids; static unsigned long *pinned_asid_map; -#define ASID_MASK (~GENMASK(asid_bits - 1, 0)) -#define ASID_FIRST_VERSION (1UL << asid_bits) +#define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0)) +#define ASID_FIRST_VERSION(info) (1UL << (info)->bits) -#define NUM_USER_ASIDS ASID_FIRST_VERSION -#define asid2idx(asid) ((asid) & ~ASID_MASK) -#define idx2asid(idx) asid2idx(idx) +#define NUM_USER_ASIDS(info) ASID_FIRST_VERSION(info) +#define asid2idx(info, asid) ((asid) & ~ASID_MASK(info)) +#define idx2asid(info, idx) asid2idx(info, idx) /* Get the ASIDBits supported by the current CPU */ static u32 get_cpu_asid_bits(void) @@ -74,20 +74,20 @@ void verify_cpu_asid_bits(void) { u32 asid = get_cpu_asid_bits(); - if (asid < asid_bits) { + if (asid < asid_info.bits) { /* * We cannot decrease the ASID size at runtime, so panic if we support * fewer ASID bits than the boot CPU. */ pr_crit("CPU%d: smaller ASID size(%u) than boot CPU (%u)\n", - smp_processor_id(), asid, asid_bits); + smp_processor_id(), asid, asid_info.bits); cpu_panic_kernel(); } } -static void set_kpti_asid_bits(unsigned long *map) +static void set_kpti_asid_bits(struct asid_info *info, unsigned long *map) { - unsigned int len = BITS_TO_LONGS(NUM_USER_ASIDS) * sizeof(unsigned long); + unsigned int len = BITS_TO_LONGS(NUM_USER_ASIDS(info)) * sizeof(unsigned long); /* * In case of KPTI kernel/user ASIDs are allocated in * pairs, the bottom bit distinguishes the two: if it @@ -100,15 +100,15 @@ static void set_kpti_asid_bits(unsigned long *map) static void set_reserved_asid_bits(struct asid_info *info) { if (pinned_asid_map) - bitmap_copy(info->map, pinned_asid_map, NUM_USER_ASIDS); + bitmap_copy(info->map, pinned_asid_map, NUM_USER_ASIDS(info)); else if (arm64_kernel_unmapped_at_el0()) - set_kpti_asid_bits(info->map); + set_kpti_asid_bits(info, info->map); else - bitmap_clear(info->map, 0, NUM_USER_ASIDS); + bitmap_clear(info->map, 0, NUM_USER_ASIDS(info)); } #define asid_gen_match(asid, info) \ - (!(((asid) ^ atomic64_read(&(info)->generation)) >> asid_bits)) + (!(((asid) ^ atomic64_read(&(info)->generation)) >> info->bits)) static void flush_context(struct asid_info *info) { @@ -129,7 +129,7 @@ static void flush_context(struct asid_info *info) */ if (asid == 0) asid = reserved_asid(info, i); - __set_bit(asid2idx(asid), info->map); + __set_bit(asid2idx(info, asid), info->map); reserved_asid(info, i) = asid; } @@ -171,7 +171,7 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm) u64 generation = atomic64_read(&info->generation); if (asid != 0) { - u64 newasid = generation | (asid & ~ASID_MASK); + u64 newasid = generation | (asid & ~ASID_MASK(info)); /* * If our current ASID was active during a rollover, we @@ -192,7 +192,7 @@ static u64 new_context(struct 
asid_info *info, struct mm_struct *mm) * We had a valid ASID in a previous life, so try to re-use * it if possible. */ - if (!__test_and_set_bit(asid2idx(asid), info->map)) + if (!__test_and_set_bit(asid2idx(info, asid), info->map)) return newasid; } @@ -203,22 +203,22 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm) * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd * pairs. */ - asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, info->map_idx); - if (asid != NUM_USER_ASIDS) + asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), info->map_idx); + if (asid != NUM_USER_ASIDS(info)) goto set_asid; /* We're out of ASIDs, so increment the global generation count */ - generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION, + generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info), &info->generation); flush_context(info); /* We have more ASIDs than CPUs, so this will always succeed */ - asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, 1); + asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), 1); set_asid: __set_bit(asid, info->map); info->map_idx = asid; - return idx2asid(asid) | generation; + return idx2asid(info, asid) | generation; } void check_and_switch_context(struct mm_struct *mm) @@ -311,13 +311,13 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm) } nr_pinned_asids++; - __set_bit(asid2idx(asid), pinned_asid_map); + __set_bit(asid2idx(info, asid), pinned_asid_map); refcount_set(&mm->context.pinned, 1); out_unlock: raw_spin_unlock_irqrestore(&cpu_asid_lock, flags); - asid &= ~ASID_MASK; + asid &= ~ASID_MASK(info); /* Set the equivalent of USER_ASID_BIT */ if (asid && arm64_kernel_unmapped_at_el0()) @@ -330,6 +330,7 @@ EXPORT_SYMBOL_GPL(arm64_mm_context_get); void arm64_mm_context_put(struct mm_struct *mm) { unsigned long flags; + struct asid_info *info = &asid_info; u64 asid = atomic64_read(&mm->context.id); if (!pinned_asid_map) @@ -338,7 +339,7 @@ void arm64_mm_context_put(struct mm_struct *mm) raw_spin_lock_irqsave(&cpu_asid_lock, flags); if (refcount_dec_and_test(&mm->context.pinned)) { - __clear_bit(asid2idx(asid), pinned_asid_map); + __clear_bit(asid2idx(info, asid), pinned_asid_map); nr_pinned_asids--; } @@ -384,12 +385,13 @@ void cpu_do_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm) static int asids_update_limit(void) { - unsigned long num_available_asids = NUM_USER_ASIDS; + struct asid_info *info = &asid_info; + unsigned long num_available_asids = NUM_USER_ASIDS(info); if (arm64_kernel_unmapped_at_el0()) { num_available_asids /= 2; if (pinned_asid_map) - set_kpti_asid_bits(pinned_asid_map); + set_kpti_asid_bits(info, pinned_asid_map); } /* * Expect allocation after rollover to fail if we don't have at least @@ -413,19 +415,19 @@ static int asids_init(void) { struct asid_info *info = &asid_info; - asid_bits = get_cpu_asid_bits(); - atomic64_set(&info->generation, ASID_FIRST_VERSION); - info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*info->map), - GFP_KERNEL); + info->bits = get_cpu_asid_bits(); + atomic64_set(&info->generation, ASID_FIRST_VERSION(info)); + info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS(info)), + sizeof(*info->map), GFP_KERNEL); if (!info->map) panic("Failed to allocate bitmap for %lu ASIDs\n", - NUM_USER_ASIDS); + NUM_USER_ASIDS(info)); info->map_idx = 1; info->active = &active_asids; info->reserved = &reserved_asids; - pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), + pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS(info)), 
sizeof(*pinned_asid_map), GFP_KERNEL); nr_pinned_asids = 0; @@ -435,7 +437,7 @@ static int asids_init(void) * and reserve kernel ASID's from beginning. */ if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) - set_kpti_asid_bits(info->map); + set_kpti_asid_bits(info, info->map); return 0; } early_initcall(asids_init); From patchwork Wed Apr 14 11:23:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameer Kolothum X-Patchwork-Id: 12202385 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C30C1C433ED for ; Wed, 14 Apr 2021 11:27:34 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 1E53461164 for ; Wed, 14 Apr 2021 11:27:34 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1E53461164 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding :Content-Type:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-ID:Date: Subject:CC:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=46VVG/gG/of2UJ0t6XjlnRwsXRKRZvQ+V9nUArgUUmU=; b=OglUmfHgUvjyRu1Fiwgkq+RcR 3UaEdlRByl3xuGG8PPUVPLfvDe6SCxoutsggt179zf87SYRXz1NbvfMWAEqKtCmAEZr9MOh5EoX90 bsqqkTwyPlCrfNgtx3Y/jPW6MkfclZRmo0QuxGGD/EFsFH1s966SkQ0eG7tRvc+I4BnQCx+YjGQO7 DW7q/vHVDKzgRH2+v43SMCTl7mnPGGysfuAaupMyvzSsA5AHodfB2zNWfk9EVMpjcdrD7yL92Z4vq oV2TzXI2b1eekjIv7oFqGr/ab+80/0Pfr49hqdmlcfK7Q/xM4LyAOsgcwmSnbm6sdRq0RqaAy8v2F +tH/XY8zg==; Received: from localhost ([::1] helo=desiato.infradead.org) by desiato.infradead.org with esmtp (Exim 4.94 #2 (Red Hat Linux)) id 1lWdeQ-00COuQ-Nj; Wed, 14 Apr 2021 11:25:43 +0000 Received: from bombadil.infradead.org ([2607:7c80:54:e::133]) by desiato.infradead.org with esmtps (Exim 4.94 #2 (Red Hat Linux)) id 1lWddd-00COj3-Ho for linux-arm-kernel@desiato.infradead.org; Wed, 14 Apr 2021 11:24:54 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Type:MIME-Version:References: In-Reply-To:Message-ID:Date:Subject:CC:To:From:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; bh=WE9jGHcsvG/h5S1UesM49EOD5usYge8drRPUrtL2KdA=; b=E4RXPAok9SIkgtKHbEjK1wWax2 O3OfGIxQIwZ2RtUexOOdzZ+nfONV6Qv9tjWtQTQjz2eitLoaLiKyD1RZVFLQ0qzrXQRv/zmewZHe+ fHx7fFikxnbjFc3ushvC0lQ4QrRKxlNd0FV2kXBPHSftFQcixDmk0zfvrnC4ZkUJAdFtUqi8xho9L cecMU+jjVaiRwMwYuQ8lnvTzqY+znhaBohP7M5kB5MSoZknGA8Q9rTVYOPEAW6VyUIOUO+i3+W4cp 
0aBLSKaIyS1KEcp4AY0XSQlInRw9aw84YH9CtWL0osZYhdFU8RAz8pAODd43AdNhCYy0gCeDO7xoW RKZlf4yw==; Received: from szxga05-in.huawei.com ([45.249.212.191]) by bombadil.infradead.org with esmtps (Exim 4.94 #2 (Red Hat Linux)) id 1lWdda-007i17-K4 for linux-arm-kernel@lists.infradead.org; Wed, 14 Apr 2021 11:24:52 +0000 Received: from DGGEMS414-HUB.china.huawei.com (unknown [172.30.72.59]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4FL0SS0dntztW9W; Wed, 14 Apr 2021 19:22:32 +0800 (CST) Received: from S00345302A-PC.china.huawei.com (10.47.82.32) by DGGEMS414-HUB.china.huawei.com (10.3.19.214) with Microsoft SMTP Server id 14.3.498.0; Wed, 14 Apr 2021 19:24:39 +0800 From: Shameer Kolothum To: , , CC: , , , , , , , , Subject: [PATCH v4 04/16] arm64/mm: Move the variable lock and tlb_flush_pending to asid_info Date: Wed, 14 Apr 2021 12:23:00 +0100 Message-ID: <20210414112312.13704-5-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com> References: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.47.82.32] X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210414_042451_016616_CCCCD46A X-CRM114-Status: GOOD ( 13.02 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Julien Grall The variables lock and tlb_flush_pending holds information for a given ASID allocator. So move them to the asid_info structure. Signed-off-by: Julien Grall Signed-off-by: Shameer Kolothum --- arch/arm64/mm/context.c | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index 1fd40a42955c..139ebc161acb 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -17,8 +17,6 @@ #include #include -static DEFINE_RAW_SPINLOCK(cpu_asid_lock); - static struct asid_info { atomic64_t generation; @@ -27,6 +25,9 @@ static struct asid_info atomic64_t __percpu *active; u64 __percpu *reserved; u32 bits; + raw_spinlock_t lock; + /* Which CPU requires context flush on next call */ + cpumask_t flush_pending; } asid_info; #define active_asid(info, cpu) (*per_cpu_ptr((info)->active, cpu)) @@ -34,7 +35,6 @@ static struct asid_info static DEFINE_PER_CPU(atomic64_t, active_asids); static DEFINE_PER_CPU(u64, reserved_asids); -static cpumask_t tlb_flush_pending; static unsigned long max_pinned_asids; static unsigned long nr_pinned_asids; @@ -137,7 +137,7 @@ static void flush_context(struct asid_info *info) * Queue a TLB invalidation for each CPU to perform on next * context-switch */ - cpumask_setall(&tlb_flush_pending); + cpumask_setall(&info->flush_pending); } static bool check_update_reserved_asid(struct asid_info *info, u64 asid, @@ -253,7 +253,7 @@ void check_and_switch_context(struct mm_struct *mm) old_active_asid, asid)) goto switch_mm_fastpath; - raw_spin_lock_irqsave(&cpu_asid_lock, flags); + raw_spin_lock_irqsave(&info->lock, flags); /* Check that our ASID belongs to the current generation. 
*/ asid = atomic64_read(&mm->context.id); if (!asid_gen_match(asid, info)) { @@ -262,11 +262,11 @@ void check_and_switch_context(struct mm_struct *mm) } cpu = smp_processor_id(); - if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending)) + if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending)) local_flush_tlb_all(); atomic64_set(&active_asid(info, cpu), asid); - raw_spin_unlock_irqrestore(&cpu_asid_lock, flags); + raw_spin_unlock_irqrestore(&info->lock, flags); switch_mm_fastpath: @@ -289,7 +289,7 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm) if (!pinned_asid_map) return 0; - raw_spin_lock_irqsave(&cpu_asid_lock, flags); + raw_spin_lock_irqsave(&info->lock, flags); asid = atomic64_read(&mm->context.id); @@ -315,7 +315,7 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm) refcount_set(&mm->context.pinned, 1); out_unlock: - raw_spin_unlock_irqrestore(&cpu_asid_lock, flags); + raw_spin_unlock_irqrestore(&info->lock, flags); asid &= ~ASID_MASK(info); @@ -336,14 +336,14 @@ void arm64_mm_context_put(struct mm_struct *mm) if (!pinned_asid_map) return; - raw_spin_lock_irqsave(&cpu_asid_lock, flags); + raw_spin_lock_irqsave(&info->lock, flags); if (refcount_dec_and_test(&mm->context.pinned)) { __clear_bit(asid2idx(info, asid), pinned_asid_map); nr_pinned_asids--; } - raw_spin_unlock_irqrestore(&cpu_asid_lock, flags); + raw_spin_unlock_irqrestore(&info->lock, flags); } EXPORT_SYMBOL_GPL(arm64_mm_context_put); @@ -426,6 +426,7 @@ static int asids_init(void) info->map_idx = 1; info->active = &active_asids; info->reserved = &reserved_asids; + raw_spin_lock_init(&info->lock); pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS(info)), sizeof(*pinned_asid_map), GFP_KERNEL); From patchwork Wed Apr 14 11:23:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameer Kolothum X-Patchwork-Id: 12202389 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 045AAC433B4 for ; Wed, 14 Apr 2021 11:27:57 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 6509261164 for ; Wed, 14 Apr 2021 11:27:56 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6509261164 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding :Content-Type:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-ID:Date: Subject:CC:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; 
From: Shameer Kolothum
Subject: [PATCH v4 05/16] arm64/mm: Remove dependency on MM in new_context
Date: Wed, 14 Apr 2021 12:23:01 +0100
Message-ID: <20210414112312.13704-6-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>

From: Julien Grall

The function new_context will be part of a generic ASID allocator. At the
moment, the MM structure is used to fetch the ASID and the pinned refcount.

To remove the dependency on MM, just pass a pointer to the current ASID and
to the pinned refcount instead. Note that 'pinned' may be NULL if the user
doesn't require pinned ASID support.

Signed-off-by: Julien Grall
Signed-off-by: Shameer Kolothum
---
v3-->v4: Changes related to the pinned ASID refcount.
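To see what the new signature enables, below is a minimal sketch of how a
hypothetical non-mm user could drive the allocator once new_context() is
exposed outside context.c (it is still static at this point in the series).
Only the new_context() parameters come from this patch; the wrapper name, the
context structure and the locking pattern are assumptions modelled on
check_and_switch_context():

/* Hypothetical user that owns a bare atomic64_t context id instead of an
 * mm_struct and does not need pinned ASIDs, so it passes NULL for the
 * refcount.
 */
struct my_ctx {
	atomic64_t id;		/* generation | ASID, like mm->context.id */
};

static u64 my_ctx_get_asid(struct asid_info *info, struct my_ctx *ctx)
{
	unsigned long flags;
	u64 asid = atomic64_read(&ctx->id);

	if (asid_gen_match(asid, info))
		return asid;

	raw_spin_lock_irqsave(&info->lock, flags);
	asid = atomic64_read(&ctx->id);
	if (!asid_gen_match(asid, info)) {
		/* No pinned-ASID refcount for this user, hence NULL. */
		asid = new_context(info, &ctx->id, NULL);
		atomic64_set(&ctx->id, asid);
	}
	raw_spin_unlock_irqrestore(&info->lock, flags);

	return asid;
}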
--- arch/arm64/mm/context.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index 139ebc161acb..628304e0d3b1 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -165,9 +165,10 @@ static bool check_update_reserved_asid(struct asid_info *info, u64 asid, return hit; } -static u64 new_context(struct asid_info *info, struct mm_struct *mm) +static u64 new_context(struct asid_info *info, atomic64_t *pasid, + refcount_t *pinned) { - u64 asid = atomic64_read(&mm->context.id); + u64 asid = atomic64_read(pasid); u64 generation = atomic64_read(&info->generation); if (asid != 0) { @@ -185,7 +186,7 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm) * takes priority, because even if it is also pinned, we need to * update the generation into the reserved_asids. */ - if (refcount_read(&mm->context.pinned)) + if (pinned && refcount_read(pinned)) return newasid; /* @@ -257,7 +258,7 @@ void check_and_switch_context(struct mm_struct *mm) /* Check that our ASID belongs to the current generation. */ asid = atomic64_read(&mm->context.id); if (!asid_gen_match(asid, info)) { - asid = new_context(info, mm); + asid = new_context(info, &mm->context.id, &mm->context.pinned); atomic64_set(&mm->context.id, asid); } @@ -306,7 +307,7 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm) * We went through one or more rollover since that ASID was * used. Ensure that it is still valid, or generate a new one. */ - asid = new_context(info, mm); + asid = new_context(info, &mm->context.id, &mm->context.pinned); atomic64_set(&mm->context.id, asid); } From patchwork Wed Apr 14 11:23:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameer Kolothum X-Patchwork-Id: 12202391 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9D983C433B4 for ; Wed, 14 Apr 2021 11:28:48 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 168BE613C8 for ; Wed, 14 Apr 2021 11:28:48 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 168BE613C8 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding :Content-Type:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-ID:Date: Subject:CC:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=8TBUFeaEQLK1C3/Kg7WbXTioy4+bDeVY0IYkyhXGUUI=; b=Qfu25StDKyarV1sjFTr9TFPQ5 
From: Shameer Kolothum
Subject: [PATCH v4 06/16] arm64/mm: Introduce NUM_CTXT_ASIDS
Date: Wed, 14 Apr 2021 12:23:02 +0100
Message-ID: <20210414112312.13704-7-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>

From: Julien Grall

At the moment ASID_FIRST_VERSION is used to know the number of ASIDs
supported. As we are going to move the ASID allocator to a separate file,
it would be better to use a different name for external users.

Signed-off-by: Julien Grall
Signed-off-by: Shameer Kolothum
---
v3-->v4: Dropped patch #6, but retained the name NUM_CTXT_ASIDS.
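As a quick illustration of the renaming (a userspace mock-up, not kernel
code; 'bits' stands in for info->bits), both names still expand to
1UL << bits; the difference is purely which name external users are expected
to see:

#include <stdio.h>

/* NUM_CTXT_ASIDS: how many ASIDs one generation provides.
 * ASID_FIRST_VERSION: the step the generation counter advances by on
 * rollover. They are intentionally the same value.
 */
#define NUM_CTXT_ASIDS(bits)		(1UL << (bits))
#define ASID_FIRST_VERSION(bits)	NUM_CTXT_ASIDS(bits)

int main(void)
{
	unsigned int bits = 16;
	unsigned long generation = ASID_FIRST_VERSION(bits);

	printf("%lu ASIDs per generation\n", NUM_CTXT_ASIDS(bits));

	/* After two rollovers the generation has advanced by two versions. */
	generation += 2 * ASID_FIRST_VERSION(bits);
	printf("generation after two rollovers: 0x%lx\n", generation);
	return 0;
}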
--- arch/arm64/mm/context.c | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index 628304e0d3b1..0f11d7c7f6a3 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -41,9 +41,9 @@ static unsigned long nr_pinned_asids; static unsigned long *pinned_asid_map; #define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0)) -#define ASID_FIRST_VERSION(info) (1UL << (info)->bits) +#define NUM_CTXT_ASIDS(info) (1UL << ((info)->bits)) +#define ASID_FIRST_VERSION(info) NUM_CTXT_ASIDS(info) -#define NUM_USER_ASIDS(info) ASID_FIRST_VERSION(info) #define asid2idx(info, asid) ((asid) & ~ASID_MASK(info)) #define idx2asid(info, idx) asid2idx(info, idx) @@ -87,7 +87,7 @@ void verify_cpu_asid_bits(void) static void set_kpti_asid_bits(struct asid_info *info, unsigned long *map) { - unsigned int len = BITS_TO_LONGS(NUM_USER_ASIDS(info)) * sizeof(unsigned long); + unsigned int len = BITS_TO_LONGS(NUM_CTXT_ASIDS(info)) * sizeof(unsigned long); /* * In case of KPTI kernel/user ASIDs are allocated in * pairs, the bottom bit distinguishes the two: if it @@ -100,11 +100,11 @@ static void set_kpti_asid_bits(struct asid_info *info, unsigned long *map) static void set_reserved_asid_bits(struct asid_info *info) { if (pinned_asid_map) - bitmap_copy(info->map, pinned_asid_map, NUM_USER_ASIDS(info)); + bitmap_copy(info->map, pinned_asid_map, NUM_CTXT_ASIDS(info)); else if (arm64_kernel_unmapped_at_el0()) set_kpti_asid_bits(info, info->map); else - bitmap_clear(info->map, 0, NUM_USER_ASIDS(info)); + bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info)); } #define asid_gen_match(asid, info) \ @@ -204,8 +204,8 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid, * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd * pairs. 
*/ - asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), info->map_idx); - if (asid != NUM_USER_ASIDS(info)) + asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), info->map_idx); + if (asid != NUM_CTXT_ASIDS(info)) goto set_asid; /* We're out of ASIDs, so increment the global generation count */ @@ -214,7 +214,7 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid, flush_context(info); /* We have more ASIDs than CPUs, so this will always succeed */ - asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), 1); + asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1); set_asid: __set_bit(asid, info->map); @@ -387,7 +387,7 @@ void cpu_do_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm) static int asids_update_limit(void) { struct asid_info *info = &asid_info; - unsigned long num_available_asids = NUM_USER_ASIDS(info); + unsigned long num_available_asids = NUM_CTXT_ASIDS(info); if (arm64_kernel_unmapped_at_el0()) { num_available_asids /= 2; @@ -418,18 +418,18 @@ static int asids_init(void) info->bits = get_cpu_asid_bits(); atomic64_set(&info->generation, ASID_FIRST_VERSION(info)); - info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS(info)), + info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)), sizeof(*info->map), GFP_KERNEL); if (!info->map) panic("Failed to allocate bitmap for %lu ASIDs\n", - NUM_USER_ASIDS(info)); + NUM_CTXT_ASIDS(info)); info->map_idx = 1; info->active = &active_asids; info->reserved = &reserved_asids; raw_spin_lock_init(&info->lock); - pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS(info)), + pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)), sizeof(*pinned_asid_map), GFP_KERNEL); nr_pinned_asids = 0; From patchwork Wed Apr 14 11:23:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameer Kolothum X-Patchwork-Id: 12202387 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B273EC43460 for ; Wed, 14 Apr 2021 11:27:49 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 4418B61164 for ; Wed, 14 Apr 2021 11:27:49 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4418B61164 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding :Content-Type:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-ID:Date: Subject:CC:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=iffASeKNAlzTSa+PAfYDSztVdd6DzdoQTkLNNcncglA=; 
From: Shameer Kolothum
Subject: [PATCH v4 07/16] arm64/mm: Move Pinned ASID related variables to asid_info
Date: Wed, 14 Apr 2021 12:23:03 +0100
Message-ID: <20210414112312.13704-8-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>
References: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>

The pinned ASID variables hold information for a given ASID allocator, so move them into the asid_info structure.
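For reference, a condensed view of the per-allocator state after this change (field names follow the diff below; the grouping comments are illustrative only and not part of the patch):

    static struct asid_info {
            atomic64_t generation;          /* current ASID generation */
            unsigned long *map;             /* bitmap of in-use ASIDs */
            atomic64_t __percpu *active;    /* ASID live on each CPU */
            u64 __percpu *reserved;         /* ASIDs preserved across rollover */
            raw_spinlock_t lock;
            cpumask_t flush_pending;        /* CPUs needing a flush on next switch */
            /* Pinned ASIDs info, previously file-scope globals */
            unsigned long *pinned_map;
            unsigned long max_pinned_asids;
            unsigned long nr_pinned_asids;
    } asid_info;

Keeping the pinned-ASID bookkeeping next to the bitmap and the lock that protects it is what later allows a second allocator instance to be created without relying on file-scope globals.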
Signed-off-by: Shameer Kolothum --- arch/arm64/mm/context.c | 38 +++++++++++++++++++------------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index 0f11d7c7f6a3..8af54e06f5bc 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -28,6 +28,10 @@ static struct asid_info raw_spinlock_t lock; /* Which CPU requires context flush on next call */ cpumask_t flush_pending; + /* Pinned ASIDs info */ + unsigned long *pinned_map; + unsigned long max_pinned_asids; + unsigned long nr_pinned_asids; } asid_info; #define active_asid(info, cpu) (*per_cpu_ptr((info)->active, cpu)) @@ -36,10 +40,6 @@ static struct asid_info static DEFINE_PER_CPU(atomic64_t, active_asids); static DEFINE_PER_CPU(u64, reserved_asids); -static unsigned long max_pinned_asids; -static unsigned long nr_pinned_asids; -static unsigned long *pinned_asid_map; - #define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0)) #define NUM_CTXT_ASIDS(info) (1UL << ((info)->bits)) #define ASID_FIRST_VERSION(info) NUM_CTXT_ASIDS(info) @@ -99,8 +99,8 @@ static void set_kpti_asid_bits(struct asid_info *info, unsigned long *map) static void set_reserved_asid_bits(struct asid_info *info) { - if (pinned_asid_map) - bitmap_copy(info->map, pinned_asid_map, NUM_CTXT_ASIDS(info)); + if (info->pinned_map) + bitmap_copy(info->map, info->pinned_map, NUM_CTXT_ASIDS(info)); else if (arm64_kernel_unmapped_at_el0()) set_kpti_asid_bits(info, info->map); else @@ -287,7 +287,7 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm) u64 asid; struct asid_info *info = &asid_info; - if (!pinned_asid_map) + if (!info->pinned_map) return 0; raw_spin_lock_irqsave(&info->lock, flags); @@ -297,7 +297,7 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm) if (refcount_inc_not_zero(&mm->context.pinned)) goto out_unlock; - if (nr_pinned_asids >= max_pinned_asids) { + if (info->nr_pinned_asids >= info->max_pinned_asids) { asid = 0; goto out_unlock; } @@ -311,8 +311,8 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm) atomic64_set(&mm->context.id, asid); } - nr_pinned_asids++; - __set_bit(asid2idx(info, asid), pinned_asid_map); + info->nr_pinned_asids++; + __set_bit(asid2idx(info, asid), info->pinned_map); refcount_set(&mm->context.pinned, 1); out_unlock: @@ -334,14 +334,14 @@ void arm64_mm_context_put(struct mm_struct *mm) struct asid_info *info = &asid_info; u64 asid = atomic64_read(&mm->context.id); - if (!pinned_asid_map) + if (!info->pinned_map) return; raw_spin_lock_irqsave(&info->lock, flags); if (refcount_dec_and_test(&mm->context.pinned)) { - __clear_bit(asid2idx(info, asid), pinned_asid_map); - nr_pinned_asids--; + __clear_bit(asid2idx(info, asid), info->pinned_map); + info->nr_pinned_asids--; } raw_spin_unlock_irqrestore(&info->lock, flags); @@ -391,8 +391,8 @@ static int asids_update_limit(void) if (arm64_kernel_unmapped_at_el0()) { num_available_asids /= 2; - if (pinned_asid_map) - set_kpti_asid_bits(info, pinned_asid_map); + if (info->pinned_map) + set_kpti_asid_bits(info, info->pinned_map); } /* * Expect allocation after rollover to fail if we don't have at least @@ -407,7 +407,7 @@ static int asids_update_limit(void) * even if all CPUs have a reserved ASID and the maximum number of ASIDs * are pinned, there still is at least one empty slot in the ASID map. 
*/ - max_pinned_asids = num_available_asids - num_possible_cpus() - 2; + info->max_pinned_asids = num_available_asids - num_possible_cpus() - 2; return 0; } arch_initcall(asids_update_limit); @@ -429,9 +429,9 @@ static int asids_init(void) info->reserved = &reserved_asids; raw_spin_lock_init(&info->lock); - pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)), - sizeof(*pinned_asid_map), GFP_KERNEL); - nr_pinned_asids = 0; + info->pinned_map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)), + sizeof(*info->pinned_map), GFP_KERNEL); + info->nr_pinned_asids = 0; /* * We cannot call set_reserved_asid_bits() here because CPU From patchwork Wed Apr 14 11:23:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameer Kolothum X-Patchwork-Id: 12202393 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 781B2C43460 for ; Wed, 14 Apr 2021 11:28:49 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 015EC613C4 for ; Wed, 14 Apr 2021 11:28:48 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 015EC613C4 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding :Content-Type:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-ID:Date: Subject:CC:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=oHqRcOkTgk3qe6b2PJCc6OFx2pv/ynt+dlRVIjM6VlY=; b=REVunnS20gNYirg4BIotKlp8H Lym7u8PAxQEZk/vJJi4Hu3NmGtA5muOE3WNAARXasFP8Ady+EXasnFQ8aZDgF4rPpqZjBlzszxDGw Mel5QZh3f/8h3MU36p2oRHX/LgVYj0D3j+3dYzEgWjZhuOh2+Y2JmUaIbzFTk/oeENn8H59VVadNw annGnT4wNWtH/eEoUaTfJ0rYZcw9rw8o5fjDrQ4twYoFFPpvfG/un45Cu7G+jQilSoVIastyr9QmN tp8rNfZz7xuW7Qgc47syAhNQPEoI2cESiHMk5zoq/2FAU3r1kY2holzCBbIR3sFegVeOtfa7y926r WARnw2dNA==; Received: from localhost ([::1] helo=desiato.infradead.org) by desiato.infradead.org with esmtp (Exim 4.94 #2 (Red Hat Linux)) id 1lWdfR-00CPGw-BA; Wed, 14 Apr 2021 11:26:45 +0000 Received: from bombadil.infradead.org ([2607:7c80:54:e::133]) by desiato.infradead.org with esmtps (Exim 4.94 #2 (Red Hat Linux)) id 1lWdds-00COmV-9o for linux-arm-kernel@desiato.infradead.org; Wed, 14 Apr 2021 11:25:10 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Type:MIME-Version:References: In-Reply-To:Message-ID:Date:Subject:CC:To:From:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; bh=8Dck1aGAF9pMfcgXcTXkcWdI6ZRxgpnEF/q2x/fG1Nk=; 
b=NHP8u/gEU8vUGhV4GlrOL+AHPr 8HD8K+CqgiASfUXqENjUCl4UU1aIhIRCcPeIUYpXMQuk5tZ92R422WrMQ1ZLE1xunZEFqfr45r9bW MmQs0aXxolwvVTBX5bU9rvNLOdCPj603cptCgbA9IRtIM6AlHZHPAhyFmQqw7oSsJ7O32s2ZHZEmD xAYQCNOCpRCqAK0t0xcK0PqCzKxCNcngpSAzrCPYt7UgqFD7l0ktrq0h0JSV+Qkmvv/CdgCG3o4z1 Fr5HP50bPdo3yPNHltEqHeE1u4tZTJB8paDdsG301g98TttPMSujAHX4mzLj8AFbYadlfjSN1H33L nEtoTAlA==; Received: from szxga07-in.huawei.com ([45.249.212.35]) by bombadil.infradead.org with esmtps (Exim 4.94 #2 (Red Hat Linux)) id 1lWddp-007i3F-Gz for linux-arm-kernel@lists.infradead.org; Wed, 14 Apr 2021 11:25:07 +0000 Received: from DGGEMS414-HUB.china.huawei.com (unknown [172.30.72.58]) by szxga07-in.huawei.com (SkyGuard) with ESMTP id 4FL0Sk5KsszB0mp; Wed, 14 Apr 2021 19:22:46 +0800 (CST) Received: from S00345302A-PC.china.huawei.com (10.47.82.32) by DGGEMS414-HUB.china.huawei.com (10.3.19.214) with Microsoft SMTP Server id 14.3.498.0; Wed, 14 Apr 2021 19:24:56 +0800 From: Shameer Kolothum To: , , CC: , , , , , , , , Subject: [PATCH v4 08/16] arm64/mm: Split asid_inits in 2 parts Date: Wed, 14 Apr 2021 12:23:04 +0100 Message-ID: <20210414112312.13704-9-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com> References: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.47.82.32] X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210414_042505_746548_C35D4A7C X-CRM114-Status: GOOD ( 12.05 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Julien Grall Move out the common initialization of the ASID allocator in a separate function. Signed-off-by: Julien Grall Signed-off-by: Shameer Kolothum --- v3-->v4 -dropped asid_per_ctxt and added pinned asid map init. --- arch/arm64/mm/context.c | 44 +++++++++++++++++++++++++++++++---------- 1 file changed, 34 insertions(+), 10 deletions(-) diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index 8af54e06f5bc..041c3c5e0216 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -412,26 +412,50 @@ static int asids_update_limit(void) } arch_initcall(asids_update_limit); -static int asids_init(void) +/* + * Initialize the ASID allocator + * + * @info: Pointer to the asid allocator structure + * @bits: Number of ASIDs available + * @pinned: Support for Pinned ASIDs + */ +static int asid_allocator_init(struct asid_info *info, u32 bits, bool pinned) { - struct asid_info *info = &asid_info; + info->bits = bits; - info->bits = get_cpu_asid_bits(); + /* + * Expect allocation after rollover to fail if we don't have at least + * one more ASID than CPUs. ASID #0 is always reserved. 
+ */ + WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus()); atomic64_set(&info->generation, ASID_FIRST_VERSION(info)); info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)), sizeof(*info->map), GFP_KERNEL); if (!info->map) - panic("Failed to allocate bitmap for %lu ASIDs\n", - NUM_CTXT_ASIDS(info)); + return -ENOMEM; info->map_idx = 1; - info->active = &active_asids; - info->reserved = &reserved_asids; raw_spin_lock_init(&info->lock); - info->pinned_map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)), - sizeof(*info->pinned_map), GFP_KERNEL); - info->nr_pinned_asids = 0; + if (pinned) { + info->pinned_map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)), + sizeof(*info->pinned_map), GFP_KERNEL); + info->nr_pinned_asids = 0; + } + + return 0; +} + +static int asids_init(void) +{ + struct asid_info *info = &asid_info; + + if (asid_allocator_init(info, get_cpu_asid_bits(), true)) + panic("Unable to initialize ASID allocator for %lu ASIDs\n", + NUM_CTXT_ASIDS(info)); + + info->active = &active_asids; + info->reserved = &reserved_asids; /* * We cannot call set_reserved_asid_bits() here because CPU From patchwork Wed Apr 14 11:23:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameer Kolothum X-Patchwork-Id: 12202395 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A86FDC433ED for ; Wed, 14 Apr 2021 11:29:23 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 1F9B3613C8 for ; Wed, 14 Apr 2021 11:29:23 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1F9B3613C8 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding :Content-Type:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-ID:Date: Subject:CC:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=04xsZ+w47zLmJu8fRzkEe6+xQ0v9W57wlHb/uPuBhrI=; b=BHw8mw0aJeINbMEbTTw1l9glh 03Y6mNhOT+bI7KIU+XzrNwb1HabShjdYnfrCgbrvK8G/gIOwZZtWGQ8Cwm3xLgyMfcuLge6RbKlll OT2TLh6POPmWH2ReGd8bfG21Wq6DFhCu5D7Y9G6vmmu5fFGa0ao5iIXX7AyTJ4K6XLMujm6AuJO2S oJ3ONH2eDtJ4IsbaE+ghJwRcIaz937YV3Fhxvf4tShnCbkv/tCAAUgm3r+Sc2QCQKQf0cYYRFfKkf fGxGYC3UIv4kvHlgUSQ3vf/aCUaTz5jb7UIn1Dvt+7tWVLg7Kf1SyfpRJL3ypth6Bi3fEdYcxgVOm 8tR+UZ1iw==; Received: from localhost ([::1] helo=desiato.infradead.org) by desiato.infradead.org with esmtp (Exim 4.94 #2 (Red Hat Linux)) id 1lWdfs-00CPSZ-C5; Wed, 14 Apr 2021 11:27:12 +0000 Received: from bombadil.infradead.org ([2607:7c80:54:e::133]) by 
From: Shameer Kolothum
Subject: [PATCH v4 09/16] arm64/mm: Split the function check_and_switch_context in 3 parts
Date: Wed, 14 Apr 2021 12:23:05 +0100
Message-ID: <20210414112312.13704-10-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>
References: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>

From: Julien Grall

The function check_and_switch_context is used to:
    1) Check whether the ASID is still valid
    2) Generate a new one if it is not
    3) Switch the context

While the last step is specific to the MM subsystem, the first two could be part of the generic ASID allocator.

After this patch the function is split into three parts, corresponding to the functions:
    1) asid_check_context: check whether the ASID is still valid
    2) asid_new_context: generate a new ASID for the context
    3) check_and_switch_context: call 1) and 2) and switch the context

1) and 2) have not been merged into a single function because we want to avoid adding a branch when the ASID is still valid. This matters once the code is later moved to a separate file, as 1) will live in a header as a static inline function.

Signed-off-by: Julien Grall
Signed-off-by: Shameer Kolothum
---
v3 comment: Will wants to avoid adding a branch when the ASID is still valid, so 1) and 2) are kept in separate functions. The former will move to a new header and be made static inline.
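A condensed sketch of the resulting split (trimmed from the diff below; not the complete functions):

    /* Slow path: take info->lock and allocate or revalidate under it. */
    static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
                                 refcount_t *pinned);

    /* Fast path: stays lockless while the ASID is in the current generation. */
    static void asid_check_context(struct asid_info *info, atomic64_t *pasid,
                                   refcount_t *pinned)
    {
            u64 asid = atomic64_read(pasid);
            u64 old = atomic64_read(this_cpu_ptr(info->active));

            if (old && asid_gen_match(asid, info) &&
                atomic64_cmpxchg_relaxed(this_cpu_ptr(info->active), old, asid))
                    return;

            asid_new_context(info, pasid, pinned);
    }

    void check_and_switch_context(struct mm_struct *mm)
    {
            if (system_supports_cnp())
                    cpu_set_reserved_ttbr0();

            asid_check_context(&asid_info, &mm->context.id, &mm->context.pinned);

            arm64_apply_bp_hardening();
            /* ... the remaining MM-specific switch (cpu_switch_mm) is unchanged */
    }

Only check_and_switch_context keeps any knowledge of mm_struct; the other two operate purely on asid_info, which is what allows them to move into a generic file later in the series.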
--- arch/arm64/mm/context.c | 70 ++++++++++++++++++++++++++++------------- 1 file changed, 48 insertions(+), 22 deletions(-) diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index 041c3c5e0216..40ef013c90c3 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -222,17 +222,49 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid, return idx2asid(info, asid) | generation; } -void check_and_switch_context(struct mm_struct *mm) +/* + * Generate a new ASID for the context. + * + * @pasid: Pointer to the current ASID batch allocated. It will be updated + * with the new ASID batch. + * @pinned: refcount if asid is pinned. + * Caller needs to make sure preempt is disabled before calling this function. + */ +static void asid_new_context(struct asid_info *info, atomic64_t *pasid, + refcount_t *pinned) { unsigned long flags; - unsigned int cpu; - u64 asid, old_active_asid; - struct asid_info *info = &asid_info; + u64 asid; + unsigned int cpu = smp_processor_id(); - if (system_supports_cnp()) - cpu_set_reserved_ttbr0(); + raw_spin_lock_irqsave(&info->lock, flags); + /* Check that our ASID belongs to the current generation. */ + asid = atomic64_read(pasid); + if (!asid_gen_match(asid, info)) { + asid = new_context(info, pasid, pinned); + atomic64_set(pasid, asid); + } - asid = atomic64_read(&mm->context.id); + if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending)) + local_flush_tlb_all(); + + atomic64_set(&active_asid(info, cpu), asid); + raw_spin_unlock_irqrestore(&info->lock, flags); +} + +/* + * Check the ASID is still valid for the context. If not generate a new ASID. + * + * @pasid: Pointer to the current ASID batch + * @pinned: refcount if asid is pinned + * Caller needs to make sure preempt is disabled before calling this function. + */ +static void asid_check_context(struct asid_info *info, atomic64_t *pasid, + refcount_t *pinned) +{ + u64 asid, old_active_asid; + + asid = atomic64_read(pasid); /* * The memory ordering here is subtle. @@ -252,24 +284,18 @@ void check_and_switch_context(struct mm_struct *mm) if (old_active_asid && asid_gen_match(asid, info) && atomic64_cmpxchg_relaxed(this_cpu_ptr(info->active), old_active_asid, asid)) - goto switch_mm_fastpath; - - raw_spin_lock_irqsave(&info->lock, flags); - /* Check that our ASID belongs to the current generation. 
*/ - asid = atomic64_read(&mm->context.id); - if (!asid_gen_match(asid, info)) { - asid = new_context(info, &mm->context.id, &mm->context.pinned); - atomic64_set(&mm->context.id, asid); - } + return; - cpu = smp_processor_id(); - if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending)) - local_flush_tlb_all(); + asid_new_context(info, pasid, pinned); +} - atomic64_set(&active_asid(info, cpu), asid); - raw_spin_unlock_irqrestore(&info->lock, flags); +void check_and_switch_context(struct mm_struct *mm) +{ + if (system_supports_cnp()) + cpu_set_reserved_ttbr0(); -switch_mm_fastpath: + asid_check_context(&asid_info, &mm->context.id, + &mm->context.pinned); arm64_apply_bp_hardening(); From patchwork Wed Apr 14 11:23:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameer Kolothum X-Patchwork-Id: 12202397 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EB63DC433B4 for ; Wed, 14 Apr 2021 11:29:32 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 5CB3D613C3 for ; Wed, 14 Apr 2021 11:29:32 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5CB3D613C3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding :Content-Type:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-ID:Date: Subject:CC:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=UFG6Wt2krHH8GKfrTbwksy5alkB2tqUYT4Rp6pPpcrU=; b=A0X9ykh6KgWBWgEri5U3/P64V S5HPuXhhY4YlVh5QUG7oY7d6O4UIKM4RgmP/DSVYv2UWq+oftovuBwTxrIMNOcmol6zQiK3aXAThe 5s419vlQLSR94lBsQ7vfEDpdb5tMJbdd5i1+UOMuOUYVnMAdOzZDyOrzimwmAqGxBxZkG2q3Ebsa9 1XIfR+UxE+RDv3iXy8RA3vW1kzl69EE59vf1WK/QwEXQ/vO9YQ+MGd2r0WAszgDu/VzWd8HFp1efz 6wEkethPzEEE2GLw4Bzk9ngEfFwXa/ttK1ENbthamUGjlOVCyy7FFYyVebq4UKp2xWam4d/2we89w zl62FCXGA==; Received: from localhost ([::1] helo=desiato.infradead.org) by desiato.infradead.org with esmtp (Exim 4.94 #2 (Red Hat Linux)) id 1lWdgM-00CPgy-6L; Wed, 14 Apr 2021 11:27:42 +0000 Received: from bombadil.infradead.org ([2607:7c80:54:e::133]) by desiato.infradead.org with esmtps (Exim 4.94 #2 (Red Hat Linux)) id 1lWde5-00COpI-Fu for linux-arm-kernel@desiato.infradead.org; Wed, 14 Apr 2021 11:25:22 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Type:MIME-Version:References: In-Reply-To:Message-ID:Date:Subject:CC:To:From:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; 
bh=rxwo2BrbSFHCdplYfEyxQVyzMwcqJgIOXqedaFAvHvo=; b=DPqjAGeE9/m29nCnCIdhsUza1W QAv8zRqm5IcQXoPRC/6PeG+uNv1XNvdhPgq+l2Uavt05b6jl7uEnkAQsRsMvfpeGaKRmC1UrN1O/k r/GNQlSmOuK8Ktnp+wHS3wMyOIO2TtT5GS1ORts40PiuQBVVmAacbgT+YevOaIZ2Pdlx5M6LKwowf S5mbJBHSFa9bFsfHruFFYMSOyugwWBvNz3kMvhuIcgwwUqImBu3dsB1Enqc8HWbmBcam/IRuVWuJj K1Fq2/LOJ7Kr5w2DgXo8bR0P7TKwZqJ/6HraUncJcK/qMXvcNObfw9aYpTCsQvxnMIHA0rZ3+GlJs nlm7yQSQ==; Received: from szxga06-in.huawei.com ([45.249.212.32]) by bombadil.infradead.org with esmtps (Exim 4.94 #2 (Red Hat Linux)) id 1lWde2-007i47-HT for linux-arm-kernel@lists.infradead.org; Wed, 14 Apr 2021 11:25:20 +0000 Received: from DGGEMS414-HUB.china.huawei.com (unknown [172.30.72.60]) by szxga06-in.huawei.com (SkyGuard) with ESMTP id 4FL0TQ3NHCzkjcG; Wed, 14 Apr 2021 19:23:22 +0800 (CST) Received: from S00345302A-PC.china.huawei.com (10.47.82.32) by DGGEMS414-HUB.china.huawei.com (10.3.19.214) with Microsoft SMTP Server id 14.3.498.0; Wed, 14 Apr 2021 19:25:04 +0800 From: Shameer Kolothum To: , , CC: , , , , , , , , Subject: [PATCH v4 10/16] arm64/mm: Split the arm64_mm_context_get/put Date: Wed, 14 Apr 2021 12:23:06 +0100 Message-ID: <20210414112312.13704-11-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com> References: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.47.82.32] X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210414_042518_937654_5197017A X-CRM114-Status: GOOD ( 14.04 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Keep only the mm specific part in arm64_mm_context_get/put and move the rest to generic functions. Signed-off-by: Shameer Kolothum --- arch/arm64/mm/context.c | 53 +++++++++++++++++++++++++++-------------- 1 file changed, 35 insertions(+), 18 deletions(-) diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index 40ef013c90c3..901472a57b5d 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -307,20 +307,21 @@ void check_and_switch_context(struct mm_struct *mm) cpu_switch_mm(mm->pgd, mm); } -unsigned long arm64_mm_context_get(struct mm_struct *mm) +static unsigned long asid_context_pinned_get(struct asid_info *info, + atomic64_t *pasid, + refcount_t *pinned) { unsigned long flags; u64 asid; - struct asid_info *info = &asid_info; if (!info->pinned_map) return 0; raw_spin_lock_irqsave(&info->lock, flags); - asid = atomic64_read(&mm->context.id); + asid = atomic64_read(pasid); - if (refcount_inc_not_zero(&mm->context.pinned)) + if (refcount_inc_not_zero(pinned)) goto out_unlock; if (info->nr_pinned_asids >= info->max_pinned_asids) { @@ -333,45 +334,61 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm) * We went through one or more rollover since that ASID was * used. Ensure that it is still valid, or generate a new one. 
*/ - asid = new_context(info, &mm->context.id, &mm->context.pinned); - atomic64_set(&mm->context.id, asid); + asid = new_context(info, pasid, pinned); + atomic64_set(pasid, asid); } info->nr_pinned_asids++; __set_bit(asid2idx(info, asid), info->pinned_map); - refcount_set(&mm->context.pinned, 1); + refcount_set(pinned, 1); out_unlock: raw_spin_unlock_irqrestore(&info->lock, flags); - asid &= ~ASID_MASK(info); - - /* Set the equivalent of USER_ASID_BIT */ - if (asid && arm64_kernel_unmapped_at_el0()) - asid |= 1; - return asid; } -EXPORT_SYMBOL_GPL(arm64_mm_context_get); -void arm64_mm_context_put(struct mm_struct *mm) +static void asid_context_pinned_put(struct asid_info *info, atomic64_t *pasid, + refcount_t *pinned) { unsigned long flags; - struct asid_info *info = &asid_info; - u64 asid = atomic64_read(&mm->context.id); + u64 asid = atomic64_read(pasid); if (!info->pinned_map) return; raw_spin_lock_irqsave(&info->lock, flags); - if (refcount_dec_and_test(&mm->context.pinned)) { + if (refcount_dec_and_test(pinned)) { __clear_bit(asid2idx(info, asid), info->pinned_map); info->nr_pinned_asids--; } raw_spin_unlock_irqrestore(&info->lock, flags); } + +unsigned long arm64_mm_context_get(struct mm_struct *mm) +{ + u64 asid; + struct asid_info *info = &asid_info; + + asid = asid_context_pinned_get(info, &mm->context.id, + &mm->context.pinned); + + /* Set the equivalent of USER_ASID_BIT */ + if (asid && arm64_kernel_unmapped_at_el0()) + asid |= 1; + + return asid; +} +EXPORT_SYMBOL_GPL(arm64_mm_context_get); + +void arm64_mm_context_put(struct mm_struct *mm) +{ + struct asid_info *info = &asid_info; + + asid_context_pinned_put(info, &mm->context.id, &mm->context.pinned); +} EXPORT_SYMBOL_GPL(arm64_mm_context_put); /* Errata workaround post TTBRx_EL1 update. 
 */

From patchwork Wed Apr 14 11:23:07 2021
X-Patchwork-Submitter: Shameer Kolothum
X-Patchwork-Id: 12202401
From: Shameer Kolothum
Subject: [PATCH v4 11/16] arm64/mm: Introduce a callback to flush the local context
Date: Wed, 14 Apr 2021 12:23:07 +0100
Message-ID: <20210414112312.13704-12-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>
References: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>

From: Julien Grall

Flushing the local context will vary depending on the actual user of the ASID allocator. Introduce a new callback to flush the local context and move the call that flushes the local TLB into it.

Signed-off-by: Julien Grall
Signed-off-by: Shameer Kolothum
---
 arch/arm64/mm/context.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 901472a57b5d..ee446f7535a3 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -32,6 +32,8 @@ static struct asid_info
 	unsigned long *pinned_map;
 	unsigned long max_pinned_asids;
 	unsigned long nr_pinned_asids;
+	/* Callback to locally flush the context. */
+	void (*flush_cpu_ctxt_cb)(void);
 } asid_info;
 
 #define active_asid(info, cpu) (*per_cpu_ptr((info)->active, cpu))
@@ -245,8 +247,9 @@ static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
 		atomic64_set(pasid, asid);
 	}
 
-	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
-		local_flush_tlb_all();
+	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending) &&
+	    info->flush_cpu_ctxt_cb)
+		info->flush_cpu_ctxt_cb();
 
 	atomic64_set(&active_asid(info, cpu), asid);
 	raw_spin_unlock_irqrestore(&info->lock, flags);
@@ -427,6 +430,11 @@ void cpu_do_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm)
 	post_ttbr_update_workaround();
 }
 
+static void asid_flush_cpu_ctxt(void)
+{
+	local_flush_tlb_all();
+}
+
 static int asids_update_limit(void)
 {
 	struct asid_info *info = &asid_info;
@@ -499,6 +507,7 @@ static int asids_init(void)
 
 	info->active = &active_asids;
 	info->reserved = &reserved_asids;
+	info->flush_cpu_ctxt_cb = asid_flush_cpu_ctxt;
 
 	/*
 	 * We cannot call set_reserved_asid_bits() here because CPU

From patchwork Wed Apr 14 11:23:08 2021
X-Patchwork-Submitter: Shameer Kolothum
X-Patchwork-Id: 12202399

From: Shameer Kolothum
Subject: [PATCH v4 12/16] arm64/mm: Introduce a callback to set reserved bits
Date: Wed, 14 Apr 2021 12:23:08 +0100
Message-ID: <20210414112312.13704-13-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>
References: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>

Setting the reserved asid bits will vary depending on the actual user of the ASID allocator. Introduce a new callback.

Signed-off-by: Shameer Kolothum
---
 arch/arm64/mm/context.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index ee446f7535a3..e9049d14f54a 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -34,6 +34,8 @@ static struct asid_info
 	unsigned long nr_pinned_asids;
 	/* Callback to locally flush the context. */
 	void (*flush_cpu_ctxt_cb)(void);
+	/* Callback to set the list of reserved ASIDs */
+	void (*set_reserved_bits)(struct asid_info *info);
 } asid_info;
 
 #define active_asid(info, cpu) (*per_cpu_ptr((info)->active, cpu))
@@ -118,7 +120,8 @@ static void flush_context(struct asid_info *info)
 	u64 asid;
 
 	/* Update the list of reserved ASIDs and the ASID bitmap. */
-	set_reserved_asid_bits(info);
+	if (info->set_reserved_bits)
+		info->set_reserved_bits(info);
 
 	for_each_possible_cpu(i) {
 		asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
@@ -508,6 +511,7 @@ static int asids_init(void)
 	info->active = &active_asids;
 	info->reserved = &reserved_asids;
 	info->flush_cpu_ctxt_cb = asid_flush_cpu_ctxt;
+	info->set_reserved_bits = set_reserved_asid_bits;
 
 	/*
 	 * We cannot call set_reserved_asid_bits() here because CPU

From patchwork Wed Apr 14 11:23:09 2021
X-Patchwork-Submitter: Shameer Kolothum
X-Patchwork-Id: 12202403
From: Shameer Kolothum
Subject: [PATCH v4 13/16] arm64: Move the ASID allocator code in a separate file
Date: Wed, 14 Apr 2021 12:23:09 +0100
Message-ID: <20210414112312.13704-14-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>
References: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>

From: Julien Grall

We will want to re-use the ASID allocator in a separate context (e.g. allocating VMIDs), so move the code to a new file. The function asid_check_context has been moved to the header as a static inline function because we want to avoid adding a branch when checking whether the ASID is still valid.

Signed-off-by: Julien Grall
Signed-off-by: Shameer Kolothum
---
 arch/arm64/include/asm/lib_asid.h |  85 ++++++++
 arch/arm64/lib/Makefile           |   2 +
 arch/arm64/lib/asid.c             | 258 +++++++++++++++++++++++++
 arch/arm64/mm/context.c           | 310 +-----------------------------
 4 files changed, 347 insertions(+), 308 deletions(-)
 create mode 100644 arch/arm64/include/asm/lib_asid.h
 create mode 100644 arch/arm64/lib/asid.c

diff --git a/arch/arm64/include/asm/lib_asid.h b/arch/arm64/include/asm/lib_asid.h
new file mode 100644
index 000000000000..acae8d243d17
--- /dev/null
+++ b/arch/arm64/include/asm/lib_asid.h
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ASM_LIB_ASID_H
+#define __ASM_ASM_LIB_ASID_H
+
+#include
+#include
+#include
+#include
+#include
+
+struct asid_info {
+	atomic64_t generation;
+	unsigned long *map;
+	unsigned int map_idx;
+	atomic64_t __percpu *active;
+	u64 __percpu *reserved;
+	u32 bits;
+	raw_spinlock_t lock;
+	/* Which CPU requires context flush on next call */
+	cpumask_t flush_pending;
+	/* Pinned ASIDs info */
+	unsigned long *pinned_map;
+	unsigned long max_pinned_asids;
+	unsigned long nr_pinned_asids;
+	/* Callback to locally flush the context.
*/ + void (*flush_cpu_ctxt_cb)(void); + /* Callback to set the list of reserved ASIDs */ + void (*set_reserved_bits)(struct asid_info *info); +}; + +#define NUM_CTXT_ASIDS(info) (1UL << ((info)->bits)) + +#define active_asid(info, cpu) (*per_cpu_ptr((info)->active, cpu)) +#define asid_gen_match(asid, info) \ + (!(((asid) ^ atomic64_read(&(info)->generation)) >> info->bits)) + +void asid_new_context(struct asid_info *info, atomic64_t *pasid, + refcount_t *pinned, unsigned int cpu); + +/* + * Check the ASID is still valid for the context. If not generate a new ASID. + * + * @pasid: Pointer to the current ASID batch + * @pinned: refcount if asid is pinned + */ +static inline void asid_check_context(struct asid_info *info, atomic64_t *pasid, + refcount_t *pinned) +{ + unsigned int cpu; + u64 asid, old_active_asid; + + asid = atomic64_read(pasid); + + /* + * The memory ordering here is subtle. + * If our active_asid is non-zero and the ASID matches the current + * generation, then we update the active_asid entry with a relaxed + * cmpxchg. Racing with a concurrent rollover means that either: + * + * - We get a zero back from the cmpxchg and end up waiting on the + * lock. Taking the lock synchronises with the rollover and so + * we are forced to see the updated generation. + * + * - We get a valid ASID back from the cmpxchg, which means the + * relaxed xchg in flush_context will treat us as reserved + * because atomic RmWs are totally ordered for a given location. + */ + old_active_asid = atomic64_read(this_cpu_ptr(info->active)); + if (old_active_asid && asid_gen_match(asid, info) && + atomic64_cmpxchg_relaxed(this_cpu_ptr(info->active), + old_active_asid, asid)) + return; + + cpu = smp_processor_id(); + asid_new_context(info, pasid, pinned, cpu); +} + +unsigned long asid_context_pinned_get(struct asid_info *info, + atomic64_t *pasid, + refcount_t *pinned); +void asid_context_pinned_put(struct asid_info *info, atomic64_t *pasid, + refcount_t *pinned); +int asid_allocator_init(struct asid_info *info, u32 bits, bool pinned); + +#endif diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile index d31e1169d9b8..d42c66ce0460 100644 --- a/arch/arm64/lib/Makefile +++ b/arch/arm64/lib/Makefile @@ -5,6 +5,8 @@ lib-y := clear_user.o delay.o copy_from_user.o \ memset.o memcmp.o strcmp.o strncmp.o strlen.o \ strnlen.o strchr.o strrchr.o tishift.o +lib-y += asid.o + ifeq ($(CONFIG_KERNEL_MODE_NEON), y) obj-$(CONFIG_XOR_BLOCKS) += xor-neon.o CFLAGS_REMOVE_xor-neon.o += -mgeneral-regs-only diff --git a/arch/arm64/lib/asid.c b/arch/arm64/lib/asid.c new file mode 100644 index 000000000000..286285616f65 --- /dev/null +++ b/arch/arm64/lib/asid.c @@ -0,0 +1,258 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Generic ASID allocator. + * + * Based on arch/arm/mm/context.c + * + * Copyright (C) 2002-2003 Deep Blue Solutions Ltd, all rights reserved. + * Copyright (C) 2012 ARM Ltd. + */ + +#include + +#include + +#define reserved_asid(info, cpu) (*per_cpu_ptr((info)->reserved, cpu)) + +#define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0)) +#define ASID_FIRST_VERSION(info) NUM_CTXT_ASIDS(info) + +#define asid2idx(info, asid) ((asid) & ~ASID_MASK(info)) +#define idx2asid(info, idx) asid2idx(info, idx) + +static void flush_context(struct asid_info *info) +{ + int i; + u64 asid; + + /* Update the list of reserved ASIDs and the ASID bitmap. 
*/ + if (info->set_reserved_bits) + info->set_reserved_bits(info); + + for_each_possible_cpu(i) { + asid = atomic64_xchg_relaxed(&active_asid(info, i), 0); + /* + * If this CPU has already been through a + * rollover, but hasn't run another task in + * the meantime, we must preserve its reserved + * ASID, as this is the only trace we have of + * the process it is still running. + */ + if (asid == 0) + asid = reserved_asid(info, i); + __set_bit(asid2idx(info, asid), info->map); + reserved_asid(info, i) = asid; + } + + /* + * Queue a TLB invalidation for each CPU to perform on next + * context-switch + */ + cpumask_setall(&info->flush_pending); +} + +static bool check_update_reserved_asid(struct asid_info *info, u64 asid, + u64 newasid) +{ + int cpu; + bool hit = false; + + /* + * Iterate over the set of reserved ASIDs looking for a match. + * If we find one, then we can update our mm to use newasid + * (i.e. the same ASID in the current generation) but we can't + * exit the loop early, since we need to ensure that all copies + * of the old ASID are updated to reflect the mm. Failure to do + * so could result in us missing the reserved ASID in a future + * generation. + */ + for_each_possible_cpu(cpu) { + if (reserved_asid(info, cpu) == asid) { + hit = true; + reserved_asid(info, cpu) = newasid; + } + } + + return hit; +} + +static u64 new_context(struct asid_info *info, atomic64_t *pasid, + refcount_t *pinned) +{ + u64 asid = atomic64_read(pasid); + u64 generation = atomic64_read(&info->generation); + + if (asid != 0) { + u64 newasid = generation | (asid & ~ASID_MASK(info)); + + /* + * If our current ASID was active during a rollover, we + * can continue to use it and this was just a false alarm. + */ + if (check_update_reserved_asid(info, asid, newasid)) + return newasid; + + /* + * If it is pinned, we can keep using it. Note that reserved + * takes priority, because even if it is also pinned, we need to + * update the generation into the reserved_asids. + */ + if (pinned && refcount_read(pinned)) + return newasid; + + /* + * We had a valid ASID in a previous life, so try to re-use + * it if possible. + */ + if (!__test_and_set_bit(asid2idx(info, asid), info->map)) + return newasid; + } + + /* + * Allocate a free ASID. If we can't find one, take a note of the + * currently active ASIDs and mark the TLBs as requiring flushes. We + * always count from ASID #2 (index 1), as we use ASID #0 when setting + * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd + * pairs. + */ + asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), info->map_idx); + if (asid != NUM_CTXT_ASIDS(info)) + goto set_asid; + + /* We're out of ASIDs, so increment the global generation count */ + generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info), + &info->generation); + flush_context(info); + + /* We have more ASIDs than CPUs, so this will always succeed */ + asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1); + +set_asid: + __set_bit(asid, info->map); + info->map_idx = asid; + return idx2asid(info, asid) | generation; +} + +/* + * Generate a new ASID for the context. + * + * @pasid: Pointer to the current ASID batch allocated. It will be updated + * with the new ASID batch. + * @pinned: refcount if asid is pinned + * @cpu: current CPU ID. 
Must have been acquired through get_cpu() + */ +void asid_new_context(struct asid_info *info, atomic64_t *pasid, + refcount_t *pinned, unsigned int cpu) +{ + unsigned long flags; + u64 asid; + + raw_spin_lock_irqsave(&info->lock, flags); + /* Check that our ASID belongs to the current generation. */ + asid = atomic64_read(pasid); + if (!asid_gen_match(asid, info)) { + asid = new_context(info, pasid, pinned); + atomic64_set(pasid, asid); + } + + if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending) && + info->flush_cpu_ctxt_cb) + info->flush_cpu_ctxt_cb(); + + atomic64_set(&active_asid(info, cpu), asid); + raw_spin_unlock_irqrestore(&info->lock, flags); +} + +unsigned long asid_context_pinned_get(struct asid_info *info, + atomic64_t *pasid, + refcount_t *pinned) +{ + unsigned long flags; + u64 asid; + + if (!info->pinned_map) + return 0; + + raw_spin_lock_irqsave(&info->lock, flags); + + asid = atomic64_read(pasid); + + if (refcount_inc_not_zero(pinned)) + goto out_unlock; + + if (info->nr_pinned_asids >= info->max_pinned_asids) { + asid = 0; + goto out_unlock; + } + + if (!asid_gen_match(asid, info)) { + /* + * We went through one or more rollover since that ASID was + * used. Ensure that it is still valid, or generate a new one. + */ + asid = new_context(info, pasid, pinned); + atomic64_set(pasid, asid); + } + + info->nr_pinned_asids++; + __set_bit(asid2idx(info, asid), info->pinned_map); + refcount_set(pinned, 1); + +out_unlock: + raw_spin_unlock_irqrestore(&info->lock, flags); + asid &= ~ASID_MASK(info); + return asid; +} + +void asid_context_pinned_put(struct asid_info *info, atomic64_t *pasid, + refcount_t *pinned) +{ + unsigned long flags; + u64 asid = atomic64_read(pasid); + + if (!info->pinned_map) + return; + + raw_spin_lock_irqsave(&info->lock, flags); + + if (refcount_dec_and_test(pinned)) { + __clear_bit(asid2idx(info, asid), info->pinned_map); + info->nr_pinned_asids--; + } + + raw_spin_unlock_irqrestore(&info->lock, flags); +} + +/* + * Initialize the ASID allocator + * + * @info: Pointer to the asid allocator structure + * @bits: Number of ASIDs available + * @pinned: Support for Pinned ASIDs + */ +int asid_allocator_init(struct asid_info *info, u32 bits, bool pinned) +{ + info->bits = bits; + + /* + * Expect allocation after rollover to fail if we don't have at least + * one more ASID than CPUs. ASID #0 is always reserved. + */ + WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus()); + atomic64_set(&info->generation, ASID_FIRST_VERSION(info)); + info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)), + sizeof(*info->map), GFP_KERNEL); + if (!info->map) + return -ENOMEM; + + info->map_idx = 1; + raw_spin_lock_init(&info->lock); + + if (pinned) { + info->pinned_map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)), + sizeof(*info->pinned_map), GFP_KERNEL); + info->nr_pinned_asids = 0; + } + + return 0; +} diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index e9049d14f54a..f44e08981841 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -13,43 +13,15 @@ #include #include +#include #include #include #include -static struct asid_info -{ - atomic64_t generation; - unsigned long *map; - unsigned int map_idx; - atomic64_t __percpu *active; - u64 __percpu *reserved; - u32 bits; - raw_spinlock_t lock; - /* Which CPU requires context flush on next call */ - cpumask_t flush_pending; - /* Pinned ASIDs info */ - unsigned long *pinned_map; - unsigned long max_pinned_asids; - unsigned long nr_pinned_asids; - /* Callback to locally flush the context. 
*/ - void (*flush_cpu_ctxt_cb)(void); - /* Callback to set the list of reserved ASIDs */ - void (*set_reserved_bits)(struct asid_info *info); -} asid_info; - -#define active_asid(info, cpu) (*per_cpu_ptr((info)->active, cpu)) -#define reserved_asid(info, cpu) (*per_cpu_ptr((info)->reserved, cpu)) - static DEFINE_PER_CPU(atomic64_t, active_asids); static DEFINE_PER_CPU(u64, reserved_asids); -#define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0)) -#define NUM_CTXT_ASIDS(info) (1UL << ((info)->bits)) -#define ASID_FIRST_VERSION(info) NUM_CTXT_ASIDS(info) - -#define asid2idx(info, asid) ((asid) & ~ASID_MASK(info)) -#define idx2asid(info, idx) asid2idx(info, idx) +static struct asid_info asid_info; /* Get the ASIDBits supported by the current CPU */ static u32 get_cpu_asid_bits(void) @@ -111,190 +83,6 @@ static void set_reserved_asid_bits(struct asid_info *info) bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info)); } -#define asid_gen_match(asid, info) \ - (!(((asid) ^ atomic64_read(&(info)->generation)) >> info->bits)) - -static void flush_context(struct asid_info *info) -{ - int i; - u64 asid; - - /* Update the list of reserved ASIDs and the ASID bitmap. */ - if (info->set_reserved_bits) - info->set_reserved_bits(info); - - for_each_possible_cpu(i) { - asid = atomic64_xchg_relaxed(&active_asid(info, i), 0); - /* - * If this CPU has already been through a - * rollover, but hasn't run another task in - * the meantime, we must preserve its reserved - * ASID, as this is the only trace we have of - * the process it is still running. - */ - if (asid == 0) - asid = reserved_asid(info, i); - __set_bit(asid2idx(info, asid), info->map); - reserved_asid(info, i) = asid; - } - - /* - * Queue a TLB invalidation for each CPU to perform on next - * context-switch - */ - cpumask_setall(&info->flush_pending); -} - -static bool check_update_reserved_asid(struct asid_info *info, u64 asid, - u64 newasid) -{ - int cpu; - bool hit = false; - - /* - * Iterate over the set of reserved ASIDs looking for a match. - * If we find one, then we can update our mm to use newasid - * (i.e. the same ASID in the current generation) but we can't - * exit the loop early, since we need to ensure that all copies - * of the old ASID are updated to reflect the mm. Failure to do - * so could result in us missing the reserved ASID in a future - * generation. - */ - for_each_possible_cpu(cpu) { - if (reserved_asid(info, cpu) == asid) { - hit = true; - reserved_asid(info, cpu) = newasid; - } - } - - return hit; -} - -static u64 new_context(struct asid_info *info, atomic64_t *pasid, - refcount_t *pinned) -{ - u64 asid = atomic64_read(pasid); - u64 generation = atomic64_read(&info->generation); - - if (asid != 0) { - u64 newasid = generation | (asid & ~ASID_MASK(info)); - - /* - * If our current ASID was active during a rollover, we - * can continue to use it and this was just a false alarm. - */ - if (check_update_reserved_asid(info, asid, newasid)) - return newasid; - - /* - * If it is pinned, we can keep using it. Note that reserved - * takes priority, because even if it is also pinned, we need to - * update the generation into the reserved_asids. - */ - if (pinned && refcount_read(pinned)) - return newasid; - - /* - * We had a valid ASID in a previous life, so try to re-use - * it if possible. - */ - if (!__test_and_set_bit(asid2idx(info, asid), info->map)) - return newasid; - } - - /* - * Allocate a free ASID. If we can't find one, take a note of the - * currently active ASIDs and mark the TLBs as requiring flushes. 
We - * always count from ASID #2 (index 1), as we use ASID #0 when setting - * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd - * pairs. - */ - asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), info->map_idx); - if (asid != NUM_CTXT_ASIDS(info)) - goto set_asid; - - /* We're out of ASIDs, so increment the global generation count */ - generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info), - &info->generation); - flush_context(info); - - /* We have more ASIDs than CPUs, so this will always succeed */ - asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1); - -set_asid: - __set_bit(asid, info->map); - info->map_idx = asid; - return idx2asid(info, asid) | generation; -} - -/* - * Generate a new ASID for the context. - * - * @pasid: Pointer to the current ASID batch allocated. It will be updated - * with the new ASID batch. - * @pinned: refcount if asid is pinned. - * Caller needs to make sure preempt is disabled before calling this function. - */ -static void asid_new_context(struct asid_info *info, atomic64_t *pasid, - refcount_t *pinned) -{ - unsigned long flags; - u64 asid; - unsigned int cpu = smp_processor_id(); - - raw_spin_lock_irqsave(&info->lock, flags); - /* Check that our ASID belongs to the current generation. */ - asid = atomic64_read(pasid); - if (!asid_gen_match(asid, info)) { - asid = new_context(info, pasid, pinned); - atomic64_set(pasid, asid); - } - - if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending) && - info->flush_cpu_ctxt_cb) - info->flush_cpu_ctxt_cb(); - - atomic64_set(&active_asid(info, cpu), asid); - raw_spin_unlock_irqrestore(&info->lock, flags); -} - -/* - * Check the ASID is still valid for the context. If not generate a new ASID. - * - * @pasid: Pointer to the current ASID batch - * @pinned: refcount if asid is pinned - * Caller needs to make sure preempt is disabled before calling this function. - */ -static void asid_check_context(struct asid_info *info, atomic64_t *pasid, - refcount_t *pinned) -{ - u64 asid, old_active_asid; - - asid = atomic64_read(pasid); - - /* - * The memory ordering here is subtle. - * If our active_asid is non-zero and the ASID matches the current - * generation, then we update the active_asid entry with a relaxed - * cmpxchg. Racing with a concurrent rollover means that either: - * - * - We get a zero back from the cmpxchg and end up waiting on the - * lock. Taking the lock synchronises with the rollover and so - * we are forced to see the updated generation. - * - * - We get a valid ASID back from the cmpxchg, which means the - * relaxed xchg in flush_context will treat us as reserved - * because atomic RmWs are totally ordered for a given location. 
- */ - old_active_asid = atomic64_read(this_cpu_ptr(info->active)); - if (old_active_asid && asid_gen_match(asid, info) && - atomic64_cmpxchg_relaxed(this_cpu_ptr(info->active), - old_active_asid, asid)) - return; - - asid_new_context(info, pasid, pinned); -} - void check_and_switch_context(struct mm_struct *mm) { if (system_supports_cnp()) @@ -313,66 +101,6 @@ void check_and_switch_context(struct mm_struct *mm) cpu_switch_mm(mm->pgd, mm); } -static unsigned long asid_context_pinned_get(struct asid_info *info, - atomic64_t *pasid, - refcount_t *pinned) -{ - unsigned long flags; - u64 asid; - - if (!info->pinned_map) - return 0; - - raw_spin_lock_irqsave(&info->lock, flags); - - asid = atomic64_read(pasid); - - if (refcount_inc_not_zero(pinned)) - goto out_unlock; - - if (info->nr_pinned_asids >= info->max_pinned_asids) { - asid = 0; - goto out_unlock; - } - - if (!asid_gen_match(asid, info)) { - /* - * We went through one or more rollover since that ASID was - * used. Ensure that it is still valid, or generate a new one. - */ - asid = new_context(info, pasid, pinned); - atomic64_set(pasid, asid); - } - - info->nr_pinned_asids++; - __set_bit(asid2idx(info, asid), info->pinned_map); - refcount_set(pinned, 1); - -out_unlock: - raw_spin_unlock_irqrestore(&info->lock, flags); - asid &= ~ASID_MASK(info); - return asid; -} - -static void asid_context_pinned_put(struct asid_info *info, atomic64_t *pasid, - refcount_t *pinned) -{ - unsigned long flags; - u64 asid = atomic64_read(pasid); - - if (!info->pinned_map) - return; - - raw_spin_lock_irqsave(&info->lock, flags); - - if (refcount_dec_and_test(pinned)) { - __clear_bit(asid2idx(info, asid), info->pinned_map); - info->nr_pinned_asids--; - } - - raw_spin_unlock_irqrestore(&info->lock, flags); -} - unsigned long arm64_mm_context_get(struct mm_struct *mm) { u64 asid; @@ -466,40 +194,6 @@ static int asids_update_limit(void) } arch_initcall(asids_update_limit); -/* - * Initialize the ASID allocator - * - * @info: Pointer to the asid allocator structure - * @bits: Number of ASIDs available - * @pinned: Support for Pinned ASIDs - */ -static int asid_allocator_init(struct asid_info *info, u32 bits, bool pinned) -{ - info->bits = bits; - - /* - * Expect allocation after rollover to fail if we don't have at least - * one more ASID than CPUs. ASID #0 is always reserved. 
- */
-	WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
-	atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
-	info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
-			    sizeof(*info->map), GFP_KERNEL);
-	if (!info->map)
-		return -ENOMEM;
-
-	info->map_idx = 1;
-	raw_spin_lock_init(&info->lock);
-
-	if (pinned) {
-		info->pinned_map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
-					   sizeof(*info->pinned_map), GFP_KERNEL);
-		info->nr_pinned_asids = 0;
-	}
-
-	return 0;
-}
-
 static int asids_init(void)
 {
 	struct asid_info *info = &asid_info;

From patchwork Wed Apr 14 11:23:10 2021
X-Patchwork-Submitter: Shameer Kolothum
X-Patchwork-Id: 12202405
From: Shameer Kolothum
Subject: [PATCH v4 14/16] arm64/lib: Add a helper to free memory allocated by the ASID allocator
Date: Wed, 14 Apr 2021 12:23:10 +0100
Message-ID: <20210414112312.13704-15-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>
References: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>

From: Julien Grall

Some users of the ASID allocator (e.g. the VMID allocator) may need to free
any resources if the initialization fails. So introduce a function that frees
any memory allocated by the ASID allocator.
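A minimal usage sketch of how an allocator user would pair the new helper with
asid_allocator_init() on its error path. This is not part of the patch: the
my_vmid_info variable, the wrapper function and the some_other_setup() step are
made up for illustration; only the asid_allocator_* calls come from this series.

/* Illustrative sketch only; see the diff below for the real helper. */
static struct asid_info my_vmid_info;

static int my_vmid_init(u32 vmid_bits)
{
	int err;

	/* One bitmap of 1 << vmid_bits entries, no pinned-ASID support. */
	err = asid_allocator_init(&my_vmid_info, vmid_bits, false);
	if (err)
		return err;

	err = some_other_setup();	/* hypothetical follow-up step */
	if (err)
		goto out_free;

	return 0;

out_free:
	/* Releases info->map and info->pinned_map; kfree(NULL) is a no-op. */
	asid_allocator_free(&my_vmid_info);
	return err;
}

Because kfree(NULL) is safe, the helper can be called whether or not the
pinned map was ever allocated.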
Signed-off-by: Julien Grall
Signed-off-by: Shameer Kolothum
---
 arch/arm64/include/asm/lib_asid.h | 2 ++
 arch/arm64/lib/asid.c             | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/arch/arm64/include/asm/lib_asid.h b/arch/arm64/include/asm/lib_asid.h
index acae8d243d17..4dbc0a3f19a6 100644
--- a/arch/arm64/include/asm/lib_asid.h
+++ b/arch/arm64/include/asm/lib_asid.h
@@ -82,4 +82,6 @@ void asid_context_pinned_put(struct asid_info *info, atomic64_t *pasid,
 			     refcount_t *pinned);
 int asid_allocator_init(struct asid_info *info, u32 bits, bool pinned);
 
+void asid_allocator_free(struct asid_info *info);
+
 #endif
diff --git a/arch/arm64/lib/asid.c b/arch/arm64/lib/asid.c
index 286285616f65..7bd031f9516a 100644
--- a/arch/arm64/lib/asid.c
+++ b/arch/arm64/lib/asid.c
@@ -256,3 +256,9 @@ int asid_allocator_init(struct asid_info *info, u32 bits, bool pinned)
 
 	return 0;
 }
+
+void asid_allocator_free(struct asid_info *info)
+{
+	kfree(info->map);
+	kfree(info->pinned_map);
+}

From patchwork Wed Apr 14 11:23:11 2021
X-Patchwork-Submitter: Shameer Kolothum
X-Patchwork-Id: 12202407
From: Shameer Kolothum
Subject: [PATCH v4 15/16] arch/arm64: Introduce a capability to tell whether 16-bit VMID is available
Date: Wed, 14 Apr 2021 12:23:11 +0100
Message-ID: <20210414112312.13704-16-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>
References: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>

From: Julien Grall

At the moment, kvm_get_vmid_bits() looks up the sanitised value of
ID_AA64MMFR1_EL1 and extracts the number of VMID bits supported. This is fine
while the function is mainly used on VMID roll-over. A new use in a follow-up
patch will call it on every context switch, so we want the function to be
cheaper.

A new capability is introduced to tell whether 16-bit VMIDs are available.
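For comparison, a side-by-side sketch of the helper before and after this
change. The _old/_new suffixes exist only for this comparison; the real
function keeps the name kvm_get_vmid_bits(), exactly as in the diff below.

/* Before: each call re-reads the sanitised ID register and extracts a field. */
static inline unsigned int kvm_get_vmid_bits_old(void)
{
	int reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);

	return get_vmid_bits(reg);
}

/*
 * After: once CPU capabilities are finalised, cpus_have_const_cap() is
 * effectively a static-branch check, cheap enough for the context-switch
 * path.
 */
static inline unsigned int kvm_get_vmid_bits_new(void)
{
	return cpus_have_const_cap(ARM64_HAS_16BIT_VMID) ? 16 : 8;
}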
Signed-off-by: Julien Grall
---
 arch/arm64/include/asm/cpucaps.h | 3 ++-
 arch/arm64/include/asm/kvm_mmu.h | 4 +---
 arch/arm64/kernel/cpufeature.c   | 9 +++++++++
 3 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index c40f2490cd7b..acb92da5c254 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -67,7 +67,8 @@
 #define ARM64_HAS_LDAPR				59
 #define ARM64_KVM_PROTECTED_MODE		60
 #define ARM64_WORKAROUND_NVIDIA_CARMEL_CNP	61
+#define ARM64_HAS_16BIT_VMID			62
 
-#define ARM64_NCAPS				62
+#define ARM64_NCAPS				63
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 90873851f677..c3080966ef83 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -213,9 +213,7 @@ void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);
 
 static inline unsigned int kvm_get_vmid_bits(void)
 {
-	int reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
-
-	return get_vmid_bits(reg);
+	return cpus_have_const_cap(ARM64_HAS_16BIT_VMID) ? 16 : 8;
 }
 
 /*
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e5281e1c8f1d..ff956fb2f712 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2203,6 +2203,15 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		.min_field_value = 1,
 	},
+	{
+		.capability = ARM64_HAS_16BIT_VMID,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.sys_reg = SYS_ID_AA64MMFR1_EL1,
+		.field_pos = ID_AA64MMFR1_VMIDBITS_SHIFT,
+		.sign = FTR_UNSIGNED,
+		.min_field_value = ID_AA64MMFR1_VMIDBITS_16,
+		.matches = has_cpuid_feature,
+	},
 	{},
 };
 

From patchwork Wed Apr 14 11:23:12 2021
X-Patchwork-Submitter: Shameer Kolothum
X-Patchwork-Id: 12202425
From: Shameer Kolothum
Subject: [PATCH v4 16/16] kvm/arm: Align the VMID allocation with the arm64 ASID one
Date: Wed, 14 Apr 2021 12:23:12 +0100
Message-ID: <20210414112312.13704-17-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>
References: <20210414112312.13704-1-shameerali.kolothum.thodi@huawei.com>

From: Julien Grall

At the moment, the VMID algorithm sends an SGI to all CPUs to force an exit
and then broadcasts a full TLB flush and I-Cache invalidation.

This patch re-uses the new ASID allocator. The benefits are:
 - CPUs are not forced to exit at roll-over. Instead the VMID will be marked
   reserved and the context will be flushed at next exit. This reduces the
   IPI traffic.
 - Context invalidation is now per-CPU rather than broadcast.
 - Catalin has a formal model of the ASID allocator.
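A rough sketch of the resulting flow on each vCPU run. The names are taken
from the diff further down; the vcpu_entry_sketch() wrapper and the elided
guest-entry steps are illustrative only, not code from this patch.

static void update_vmid(struct kvm_vmid *vmid)
{
	/* Lock-free cmpxchg on the common path; allocator lock only on roll-over. */
	asid_check_context(&vmid_info, &vmid->id, NULL);
}

static void vcpu_entry_sketch(struct kvm_vcpu *vcpu)
{
	preempt_disable();

	/*
	 * The allocator tracks active VMIDs per physical CPU, so the VMID
	 * must be (re)checked with preemption disabled.  On roll-over the
	 * CPU is marked in flush_pending and vmid_flush_cpu_ctxt() does a
	 * local (not broadcast) TLB/I-cache flush before guest entry.
	 */
	update_vmid(&vcpu->arch.hw_mmu->vmid);

	/* ... enter the guest ... */

	preempt_enable();
}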
With the new algo, the code is now adapted: - The function __kvm_flush_vm_context() has been renamed to __kvm_tlb_flush_local_all() and now only flushing the current CPU context. - The call to update_vmid() will be done with preemption disabled as the new algo requires to store information per-CPU. - The TLBs associated to EL1 will be flushed when booting a CPU to deal with stale information. This was previously done on the allocation of the first VMID of a new generation. Signed-off-by: Julien Grall Signed-off-by: Shameer Kolothum --- Test Results: v4: The measurement was made on a HiSilicon D06 platform with maxcpus set to 8 and with the number of VMID limited to 4-bit. The test involves running concurrently 40 guests with 2 vCPUs. Each guest will then execute hackbench 5 times before exiting. The performance difference between the current algo and the new one are (avg. of 10 runs): - 1.9% less entry/exit from guest - 0.7% faster v3: The measurement was made on a Seattle based SoC (8 CPUs), with the number of VMID limited to 4-bit. The test involves running concurrently 40 guests with 2 vCPUs. Each guest will then execute hackbench 5 times before exiting. The performance difference between the current algo and the new one are: - 2.5% less exit from the guest - 22.4% more flush, although they are now local rather than broadcasted - 0.11% faster (just for the record) --- arch/arm64/include/asm/kvm_asm.h | 4 +- arch/arm64/include/asm/kvm_host.h | 5 +- arch/arm64/include/asm/kvm_mmu.h | 3 +- arch/arm64/kvm/arm.c | 124 +++++++++++------------------ arch/arm64/kvm/hyp/nvhe/hyp-main.c | 6 +- arch/arm64/kvm/hyp/nvhe/tlb.c | 10 +-- arch/arm64/kvm/hyp/vhe/tlb.c | 10 +-- arch/arm64/kvm/mmu.c | 1 - 8 files changed, 65 insertions(+), 98 deletions(-) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index a7ab84f781f7..29697c5ab2c2 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -44,7 +44,7 @@ #define __KVM_HOST_SMCCC_FUNC___kvm_hyp_init 0 #define __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run 1 -#define __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context 2 +#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_local_all 2 #define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa 3 #define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid 4 #define __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context 5 @@ -182,7 +182,7 @@ DECLARE_KVM_NVHE_SYM(__per_cpu_end); DECLARE_KVM_HYP_SYM(__bp_harden_hyp_vecs); #define __bp_harden_hyp_vecs CHOOSE_HYP_SYM(__bp_harden_hyp_vecs) -extern void __kvm_flush_vm_context(void); +extern void __kvm_tlb_flush_local_all(void); extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu); extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa, int level); diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 3d10e6527f7d..5309216e4a94 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -70,9 +70,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu); void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu); struct kvm_vmid { - /* The VMID generation used for the virt. 
memory system */ - u64 vmid_gen; - u32 vmid; + atomic64_t id; }; struct kvm_s2_mmu { @@ -631,7 +629,6 @@ void kvm_arm_resume_guest(struct kvm *kvm); ret; \ }) -void force_vm_exit(const cpumask_t *mask); void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot); int handle_exit(struct kvm_vcpu *vcpu, int exception_index); diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h index c3080966ef83..43e83df87e3a 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -252,7 +252,8 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu) u64 cnp = system_supports_cnp() ? VTTBR_CNP_BIT : 0; baddr = mmu->pgd_phys; - vmid_field = (u64)vmid->vmid << VTTBR_VMID_SHIFT; + vmid_field = atomic64_read(&vmid->id) << VTTBR_VMID_SHIFT; + vmid_field &= VTTBR_VMID_MASK(kvm_get_vmid_bits()); return kvm_phys_to_vttbr(baddr) | vmid_field | cnp; } diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 7f06ba76698d..c63242db2d42 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -31,6 +31,7 @@ #include #include #include +#include #include #include #include @@ -55,10 +56,10 @@ static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page); unsigned long kvm_arm_hyp_percpu_base[NR_CPUS]; DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params); -/* The VMID used in the VTTBR */ -static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1); -static u32 kvm_next_vmid; -static DEFINE_SPINLOCK(kvm_vmid_lock); +static DEFINE_PER_CPU(atomic64_t, active_vmids); +static DEFINE_PER_CPU(u64, reserved_vmids); + +static struct asid_info vmid_info; static bool vgic_present; @@ -486,85 +487,22 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu) return vcpu_mode_priv(vcpu); } -/* Just ensure a guest exit from a particular CPU */ -static void exit_vm_noop(void *info) -{ -} - -void force_vm_exit(const cpumask_t *mask) +static void vmid_flush_cpu_ctxt(void) { - preempt_disable(); - smp_call_function_many(mask, exit_vm_noop, NULL, true); - preempt_enable(); + kvm_call_hyp(__kvm_tlb_flush_local_all); } -/** - * need_new_vmid_gen - check that the VMID is still valid - * @vmid: The VMID to check - * - * return true if there is a new generation of VMIDs being used - * - * The hardware supports a limited set of values with the value zero reserved - * for the host, so we check if an assigned value belongs to a previous - * generation, which requires us to assign a new value. If we're the first to - * use a VMID for the new generation, we must flush necessary caches and TLBs - * on all CPUs. - */ -static bool need_new_vmid_gen(struct kvm_vmid *vmid) +static void vmid_set_reserved_bits(struct asid_info *info) { - u64 current_vmid_gen = atomic64_read(&kvm_vmid_gen); - smp_rmb(); /* Orders read of kvm_vmid_gen and kvm->arch.vmid */ - return unlikely(READ_ONCE(vmid->vmid_gen) != current_vmid_gen); + bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info)); } - /** * update_vmid - Update the vmid with a valid VMID for the current generation * @vmid: The stage-2 VMID information struct */ static void update_vmid(struct kvm_vmid *vmid) { - if (!need_new_vmid_gen(vmid)) - return; - - spin_lock(&kvm_vmid_lock); - - /* - * We need to re-check the vmid_gen here to ensure that if another vcpu - * already allocated a valid vmid for this vm, then this vcpu should - * use the same vmid. - */ - if (!need_new_vmid_gen(vmid)) { - spin_unlock(&kvm_vmid_lock); - return; - } - - /* First user of a new VMID generation? 
*/ - if (unlikely(kvm_next_vmid == 0)) { - atomic64_inc(&kvm_vmid_gen); - kvm_next_vmid = 1; - - /* - * On SMP we know no other CPUs can use this CPU's or each - * other's VMID after force_vm_exit returns since the - * kvm_vmid_lock blocks them from reentry to the guest. - */ - force_vm_exit(cpu_all_mask); - /* - * Now broadcast TLB + ICACHE invalidation over the inner - * shareable domain to make sure all data structures are - * clean. - */ - kvm_call_hyp(__kvm_flush_vm_context); - } - - vmid->vmid = kvm_next_vmid; - kvm_next_vmid++; - kvm_next_vmid &= (1 << kvm_get_vmid_bits()) - 1; - - smp_wmb(); - WRITE_ONCE(vmid->vmid_gen, atomic64_read(&kvm_vmid_gen)); - - spin_unlock(&kvm_vmid_lock); + asid_check_context(&vmid_info, &vmid->id, NULL); } static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu) @@ -728,8 +666,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) */ cond_resched(); - update_vmid(&vcpu->arch.hw_mmu->vmid); - check_vcpu_requests(vcpu); /* @@ -739,6 +675,15 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) */ preempt_disable(); + /* + * The ASID/VMID allocator only tracks active VMIDs per + * physical CPU, and therefore the VMID allocated may not be + * preserved on VMID roll-over if the task was preempted, + * making a thread's VMID inactive. So we need to call + * update_vttbr in non-premptible context. + */ + update_vmid(&vcpu->arch.hw_mmu->vmid); + kvm_pmu_flush_hwstate(vcpu); local_irq_disable(); @@ -777,8 +722,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) */ smp_store_mb(vcpu->mode, IN_GUEST_MODE); - if (ret <= 0 || need_new_vmid_gen(&vcpu->arch.hw_mmu->vmid) || - kvm_request_pending(vcpu)) { + if (ret <= 0 || kvm_request_pending(vcpu)) { vcpu->mode = OUTSIDE_GUEST_MODE; isb(); /* Ensure work in x_flush_hwstate is committed */ kvm_pmu_sync_hwstate(vcpu); @@ -1460,6 +1404,8 @@ static void cpu_hyp_reset(void) { if (!is_kernel_in_hyp_mode()) __hyp_reset_vectors(); + + kvm_call_hyp(__kvm_tlb_flush_local_all); } /* @@ -1635,9 +1581,32 @@ static bool init_psci_relay(void) static int init_common_resources(void) { + struct asid_info *info = &vmid_info; + int err; + + /* + * Initialize the ASID allocator telling it to allocate a single + * VMID per VM. 
+ */ + err = asid_allocator_init(info, kvm_get_vmid_bits(), false); + if (err) { + kvm_err("Failed to initialize VMID allocator.\n"); + return err; + } + + info->active = &active_vmids; + info->reserved = &reserved_vmids; + info->flush_cpu_ctxt_cb = vmid_flush_cpu_ctxt; + info->set_reserved_bits = vmid_set_reserved_bits; + return kvm_set_ipa_limit(); } +static void free_common_resources(void) +{ + asid_allocator_free(&vmid_info); +} + static int init_subsystems(void) { int err = 0; @@ -1918,7 +1887,7 @@ int kvm_arch_init(void *opaque) err = kvm_arm_init_sve(); if (err) - return err; + goto out_err; if (!in_hyp_mode) { err = init_hyp_mode(); @@ -1952,6 +1921,7 @@ int kvm_arch_init(void *opaque) if (!in_hyp_mode) teardown_hyp_mode(); out_err: + free_common_resources(); return err; } diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 936328207bde..62027448d534 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -25,9 +25,9 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt) cpu_reg(host_ctxt, 1) = __kvm_vcpu_run(kern_hyp_va(vcpu)); } -static void handle___kvm_flush_vm_context(struct kvm_cpu_context *host_ctxt) +static void handle___kvm_tlb_flush_local_all(struct kvm_cpu_context *host_ctxt) { - __kvm_flush_vm_context(); + __kvm_tlb_flush_local_all(); } static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt) @@ -112,7 +112,7 @@ typedef void (*hcall_t)(struct kvm_cpu_context *); static const hcall_t host_hcall[] = { HANDLE_FUNC(__kvm_vcpu_run), - HANDLE_FUNC(__kvm_flush_vm_context), + HANDLE_FUNC(__kvm_tlb_flush_local_all), HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa), HANDLE_FUNC(__kvm_tlb_flush_vmid), HANDLE_FUNC(__kvm_flush_cpu_context), diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c index 229b06748c20..3f1fc5125e9e 100644 --- a/arch/arm64/kvm/hyp/nvhe/tlb.c +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c @@ -138,10 +138,10 @@ void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu) __tlb_switch_to_host(&cxt); } -void __kvm_flush_vm_context(void) +void __kvm_tlb_flush_local_all(void) { - dsb(ishst); - __tlbi(alle1is); + dsb(nshst); + __tlbi(alle1); /* * VIPT and PIPT caches are not affected by VMID, so no maintenance @@ -153,7 +153,7 @@ void __kvm_flush_vm_context(void) * */ if (icache_is_vpipt()) - asm volatile("ic ialluis"); + asm volatile("ic iallu" : : ); - dsb(ish); + dsb(nsh); } diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c index 66f17349f0c3..89f229e77b7d 100644 --- a/arch/arm64/kvm/hyp/vhe/tlb.c +++ b/arch/arm64/kvm/hyp/vhe/tlb.c @@ -142,10 +142,10 @@ void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu) __tlb_switch_to_host(&cxt); } -void __kvm_flush_vm_context(void) +void __kvm_tlb_flush_local_all(void) { - dsb(ishst); - __tlbi(alle1is); + dsb(nshst); + __tlbi(alle1); /* * VIPT and PIPT caches are not affected by VMID, so no maintenance @@ -157,7 +157,7 @@ void __kvm_flush_vm_context(void) * */ if (icache_is_vpipt()) - asm volatile("ic ialluis"); + asm volatile("ic iallu" : : ); - dsb(ish); + dsb(nsh); } diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 8711894db8c2..4933fc9a13fb 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -390,7 +390,6 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu) mmu->kvm = kvm; mmu->pgt = pgt; mmu->pgd_phys = __pa(pgt->pgd); - mmu->vmid.vmid_gen = 0; return 0; out_destroy_pgtable: