From patchwork Thu Jul 23 14:57:16 2020
X-Patchwork-Submitter: Jean-Philippe Brucker <jean-philippe@linaro.org>
X-Patchwork-Id: 11681169
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
To: iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org,
	linux-mm@kvack.org
Cc: fenghua.yu@intel.com, jacob.jun.pan@linux.intel.com,
	Jean-Philippe Brucker <jean-philippe@linaro.org>, catalin.marinas@arm.com,
	joro@8bytes.org, robin.murphy@arm.com, hch@infradead.org,
	zhengxiang9@huawei.com, Jonathan.Cameron@huawei.com,
	zhangfei.gao@linaro.org, will@kernel.org, xuzaibo@huawei.com,
	baolu.lu@linux.intel.com
Subject: [PATCH v9 04/13] arm64: mm: Pin down ASIDs for sharing mm with devices
Date: Thu, 23 Jul 2020 16:57:16 +0200
Message-Id: <20200723145724.3014766-5-jean-philippe@linaro.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200723145724.3014766-1-jean-philippe@linaro.org>
References: <20200723145724.3014766-1-jean-philippe@linaro.org>

To enable address space sharing with the IOMMU, introduce
arm64_mm_context_get() and arm64_mm_context_put(), which pin down a
context and ensure that it keeps its ASID after a rollover. Export the
symbols to let the modular SMMUv3 driver use them.

Pinning is necessary because a device constantly needs a valid ASID,
unlike tasks that only require one when running. Without pinning, we
would need to notify the IOMMU when we're about to use a new ASID for a
task, and it would get complicated when a new task is assigned a shared
ASID. Consider the following scenario with no ASID pinned:
1. Task t1 is running on CPUx with shared ASID (gen=1, asid=1)
2. Task t2 is scheduled on CPUx, gets ASID (1, 2)
3. Task tn is scheduled on CPUy, a rollover occurs, tn gets ASID (2, 1)

We would now have to immediately generate a new ASID for t1, notify the
IOMMU, and finally enable task tn. We are holding the lock during all
that time, since we can't afford to let another CPU trigger a rollover.
The IOMMU issues invalidation commands that can take tens of
milliseconds. It gets needlessly complicated. All we wanted to do was
schedule task tn, which has no business with the IOMMU. By letting the
IOMMU pin tasks when needed, we avoid stalling the slow path and let the
pinning fail when we're out of shareable ASIDs.

After a rollover, the allocator expects at least one ASID to be available
in addition to the reserved ones (one per CPU). So (NR_ASIDS - NR_CPUS - 1)
is the maximum number of ASIDs that can be shared with the IOMMU.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
v9:
* Make mm->context.pinned a refcount_t.
* Prepend exported symbols with arm64_.
* Initialize pinned_asid_map with kernel ASID bits if necessary.
* Allow pinned_asid_map to be NULL. It could also depend on
  CONFIG_ARM_SMMU_V3_SVA since it adds an overhead on rollover
  (memcpy() asid_map instead of memset()), but that can be changed later.
* Only set the USER_ASID_BIT if kpti is enabled (previously it was set if
  CONFIG_UNMAP_KERNEL_AT_EL0).
---
 arch/arm64/include/asm/mmu.h         |   3 +
 arch/arm64/include/asm/mmu_context.h |  11 ++-
 arch/arm64/mm/context.c              | 105 +++++++++++++++++++++++++--
 3 files changed, 112 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 8444df000181..77b3bd6ddcfa 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -17,11 +17,14 @@
 
 #ifndef __ASSEMBLY__
 
+#include <linux/refcount.h>
+
 typedef struct {
 	atomic64_t	id;
 #ifdef CONFIG_COMPAT
 	void		*sigpage;
 #endif
+	refcount_t	pinned;
 	void		*vdso;
 	unsigned long	flags;
 } mm_context_t;
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index b0bd9b55594c..0a10a3412c93 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -177,7 +177,13 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
 #define destroy_context(mm)		do { } while(0)
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu);
 
-#define init_new_context(tsk,mm)	({ atomic64_set(&(mm)->context.id, 0); 0; })
+static inline int
+init_new_context(struct task_struct *tsk, struct mm_struct *mm)
+{
+	atomic64_set(&mm->context.id, 0);
+	refcount_set(&mm->context.pinned, 0);
+	return 0;
+}
 
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 static inline void update_saved_ttbr0(struct task_struct *tsk,
@@ -250,6 +256,9 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 void verify_cpu_asid_bits(void);
 void post_ttbr_update_workaround(void);
 
+unsigned long arm64_mm_context_get(struct mm_struct *mm);
+void arm64_mm_context_put(struct mm_struct *mm);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* !__ASM_MMU_CONTEXT_H */
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index d702d60e64da..5c06973b15db 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -27,6 +27,10 @@ static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
 static cpumask_t tlb_flush_pending;
 
+static unsigned long max_pinned_asids;
+static unsigned long nr_pinned_asids;
+static unsigned long *pinned_asid_map;
+
 #define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
 #define ASID_FIRST_VERSION	(1UL << asid_bits)
 
@@ -72,7 +76,7 @@ void verify_cpu_asid_bits(void)
 	}
 }
 
-static void set_kpti_asid_bits(void)
+static void set_kpti_asid_bits(unsigned long *map)
 {
 	unsigned int len = BITS_TO_LONGS(NUM_USER_ASIDS) * sizeof(unsigned long);
 	/*
@@ -81,13 +85,15 @@ static void set_kpti_asid_bits(void)
 	 * is set, then the ASID will map only userspace. Thus
 	 * mark even as reserved for kernel.
 	 */
-	memset(asid_map, 0xaa, len);
+	memset(map, 0xaa, len);
 }
 
 static void set_reserved_asid_bits(void)
 {
-	if (arm64_kernel_unmapped_at_el0())
-		set_kpti_asid_bits();
+	if (pinned_asid_map)
+		bitmap_copy(asid_map, pinned_asid_map, NUM_USER_ASIDS);
+	else if (arm64_kernel_unmapped_at_el0())
+		set_kpti_asid_bits(asid_map);
 	else
 		bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
 }
@@ -165,6 +171,14 @@ static u64 new_context(struct mm_struct *mm)
 		if (check_update_reserved_asid(asid, newasid))
 			return newasid;
 
+		/*
+		 * If it is pinned, we can keep using it. Note that reserved
+		 * takes priority, because even if it is also pinned, we need to
+		 * update the generation into the reserved_asids.
+		 */
+		if (refcount_read(&mm->context.pinned))
+			return newasid;
+
 		/*
 		 * We had a valid ASID in a previous life, so try to re-use
 		 * it if possible.
@@ -254,6 +268,71 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	cpu_switch_mm(mm->pgd, mm);
 }
 
+unsigned long arm64_mm_context_get(struct mm_struct *mm)
+{
+	unsigned long flags;
+	u64 asid;
+
+	if (!pinned_asid_map)
+		return 0;
+
+	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
+
+	asid = atomic64_read(&mm->context.id);
+
+	if (refcount_inc_not_zero(&mm->context.pinned))
+		goto out_unlock;
+
+	if (nr_pinned_asids >= max_pinned_asids) {
+		asid = 0;
+		goto out_unlock;
+	}
+
+	if (!asid_gen_match(asid)) {
+		/*
+		 * We went through one or more rollover since that ASID was
+		 * used. Ensure that it is still valid, or generate a new one.
+		 */
+		asid = new_context(mm);
+		atomic64_set(&mm->context.id, asid);
+	}
+
+	nr_pinned_asids++;
+	__set_bit(asid2idx(asid), pinned_asid_map);
+	refcount_set(&mm->context.pinned, 1);
+
+out_unlock:
+	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
+
+	asid &= ~ASID_MASK;
+
+	/* Set the equivalent of USER_ASID_BIT */
+	if (asid && arm64_kernel_unmapped_at_el0())
+		asid |= 1;
+
+	return asid;
+}
+EXPORT_SYMBOL_GPL(arm64_mm_context_get);
+
+void arm64_mm_context_put(struct mm_struct *mm)
+{
+	unsigned long flags;
+	u64 asid = atomic64_read(&mm->context.id);
+
+	if (!pinned_asid_map)
+		return;
+
+	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
+
+	if (refcount_dec_and_test(&mm->context.pinned)) {
+		__clear_bit(asid2idx(asid), pinned_asid_map);
+		nr_pinned_asids--;
+	}
+
+	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
+}
+EXPORT_SYMBOL_GPL(arm64_mm_context_put);
+
 /* Errata workaround post TTBRx_EL1 update. */
 asmlinkage void post_ttbr_update_workaround(void)
 {
@@ -294,8 +373,11 @@ static int asids_update_limit(void)
 {
 	unsigned long num_available_asids = NUM_USER_ASIDS;
 
-	if (arm64_kernel_unmapped_at_el0())
+	if (arm64_kernel_unmapped_at_el0()) {
 		num_available_asids /= 2;
+		if (pinned_asid_map)
+			set_kpti_asid_bits(pinned_asid_map);
+	}
 	/*
 	 * Expect allocation after rollover to fail if we don't have at least
 	 * one more ASID than CPUs. ASID #0 is reserved for init_mm.
@@ -303,6 +385,13 @@ static int asids_update_limit(void)
 	WARN_ON(num_available_asids - 1 <= num_possible_cpus());
 	pr_info("ASID allocator initialised with %lu entries\n",
 		num_available_asids);
+
+	/*
+	 * There must always be an ASID available after rollover. Ensure that,
+	 * even if all CPUs have a reserved ASID and the maximum number of ASIDs
+	 * are pinned, there still is at least one empty slot in the ASID map.
+	 */
+	max_pinned_asids = num_available_asids - num_possible_cpus() - 2;
 	return 0;
 }
 arch_initcall(asids_update_limit);
@@ -317,13 +406,17 @@ static int asids_init(void)
 		panic("Failed to allocate bitmap for %lu ASIDs\n",
 		      NUM_USER_ASIDS);
 
+	pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS),
+				  sizeof(*pinned_asid_map), GFP_KERNEL);
+	nr_pinned_asids = 0;
+
 	/*
 	 * We cannot call set_reserved_asid_bits() here because CPU
 	 * caps are not finalized yet, so it is safer to assume KPTI
 	 * and reserve kernel ASID's from beginning.
 	 */
 	if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
-		set_kpti_asid_bits();
+		set_kpti_asid_bits(asid_map);
 	return 0;
 }
 early_initcall(asids_init);
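
As a usage illustration (not part of this patch), here is a minimal sketch
of how an SVA-capable driver could consume the two new exports. The
my_sva_bond structure and the my_sva_bind()/my_sva_unbind() helpers are
hypothetical names; the real consumer is the SMMUv3 SVA code added later in
this series.

/*
 * Illustrative sketch only: my_sva_bond, my_sva_bind() and my_sva_unbind()
 * are hypothetical and not part of this series. Assumes the caller keeps a
 * reference to the mm for the lifetime of the bond.
 */
#include <linux/errno.h>
#include <linux/mm_types.h>
#include <asm/mmu_context.h>

struct my_sva_bond {
	struct mm_struct	*mm;
	unsigned long		asid;	/* ASID to program into the device */
};

static int my_sva_bind(struct my_sva_bond *bond, struct mm_struct *mm)
{
	unsigned long asid;

	/* Pin the ASID so it survives rollovers while the device uses it */
	asid = arm64_mm_context_get(mm);
	if (!asid)
		return -ENOSPC;	/* no pinned map, or out of shareable ASIDs */

	bond->mm = mm;
	bond->asid = asid;
	/* ... write the ASID into the device's context descriptor ... */
	return 0;
}

static void my_sva_unbind(struct my_sva_bond *bond)
{
	/* ... clear the device's context descriptor and invalidate its TLB ... */
	arm64_mm_context_put(bond->mm);
}

A return value of 0 can double as the failure indication because ASID #0 is
reserved for init_mm and is never handed out to user mms.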