From patchwork Tue Dec 5 17:38:27 2017
X-Patchwork-Submitter: Julien Thierry
X-Patchwork-Id: 10093501
From: Julien Thierry
To: linux-arm-kernel@lists.infradead.org
Cc: Catalin Marinas, Vladimir Murzin, Will Deacon, James Morse, Julien Thierry
Subject: [PATCH] arm64/mm: Do not write ASID generation to ttbr0
Date: Tue, 5 Dec 2017 17:38:27 +0000
Message-Id: <1512495507-23259-1-git-send-email-julien.thierry@arm.com>

When writing the user ASID to ttbr0, 16 bits are copied to ttbr0,
potentially including part of the ASID generation in ttbr0.ASID.
If the kernel is using fewer than 16 ASID bits and the remaining ttbr0
bits aren't RES0, two tasks sharing the same mm context might end up
running with different ttbr0.ASID values. This can happen when one
thread is scheduled before an ASID roll-over and the other is scheduled
after it.

Pad the generation out of the 16 bits of the mm id that are written to
ttbr0, so that what the hardware sees as the ASID is exactly what the
kernel considers the ASID.

Signed-off-by: Julien Thierry
Reported-by: Will Deacon
Cc: Will Deacon
Cc: Catalin Marinas
Cc: James Morse
Cc: Vladimir Murzin
---
 arch/arm64/include/asm/mmu.h |  8 +++++++-
 arch/arm64/mm/context.c      | 21 +++++++++++++++++++--
 arch/arm64/mm/proc.S         |  3 ++-
 3 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 0d34bf0..61e5436 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -16,6 +16,10 @@
 #ifndef __ASM_MMU_H
 #define __ASM_MMU_H
 
+#define ASID_MAX_BITS	16
+
+#ifndef __ASSEMBLY__
+
 #define MMCF_AARCH32	0x1	/* mm context flag for AArch32 executables */
 
 typedef struct {
@@ -29,7 +33,8 @@
  * ASID change and therefore doesn't need to reload the counter using
  * atomic64_read.
  */
-#define ASID(mm)	((mm)->context.id.counter & 0xffff)
+#define ASID(mm)	\
+	((mm)->context.id.counter & GENMASK(ASID_MAX_BITS - 1, 0))
 
 extern void paging_init(void);
 extern void bootmem_init(void);
@@ -41,4 +46,5 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
 extern void mark_linear_text_alias_ro(void);
 
+#endif /* __ASSEMBLY__ */
 #endif
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 6f40170..a7c72d4 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -37,9 +37,9 @@ static DEFINE_PER_CPU(u64, reserved_asids);
 static cpumask_t tlb_flush_pending;
 
+#define ASID_FIRST_VERSION	(1UL << ASID_MAX_BITS)
 #define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
-#define ASID_FIRST_VERSION	(1UL << asid_bits)
-#define NUM_USER_ASIDS		ASID_FIRST_VERSION
+#define NUM_USER_ASIDS		(1UL << asid_bits)
 
 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
@@ -60,6 +60,8 @@ static u32 get_cpu_asid_bits(void)
 		asid = 16;
 	}
 
+	WARN_ON(asid > ASID_MAX_BITS);
+
 	return asid;
 }
@@ -142,6 +144,14 @@ static bool check_update_reserved_asid(u64 asid, u64 newasid)
 	return hit;
 }
 
+/*
+ * Format of ASID is:
+ * - bits .. 0 -> actual ASID
+ * - bits 63..16 -> ASID generation
+ *
+ * Generation is padded to the maximum supported ASID size
+ * to avoid it being taken into account in the ttbr.
+ */
 static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 {
 	static u32 cur_idx = 1;
@@ -180,6 +190,13 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 		/* We're out of ASIDs, so increment the global generation count */
 		generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION,
 							 &asid_generation);
+
+		/*
+		 * It is unlikely the generation will ever overflow, but if this
+		 * happens, let it be known strange things can occur.
+		 */
+		WARN_ON(generation == ASID_FIRST_VERSION);
+
 		flush_context(cpu);
 
 		/* We have more ASIDs than CPUs, so this will always succeed */
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 95233df..33c7f13 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include <asm/mmu.h>
 #include
 #include
 #include
@@ -140,7 +141,7 @@ ENDPROC(cpu_do_resume)
 ENTRY(cpu_do_switch_mm)
 	pre_ttbr0_update_workaround x0, x2, x3
 	mmid	x1, x1				// get mm->context.id
-	bfi	x0, x1, #48, #16		// set the ASID
+	bfi	x0, x1, #48, #ASID_MAX_BITS	// set the ASID
 	msr	ttbr0_el1, x0			// set TTBR0
 	isb
 	post_ttbr0_update_workaround

--
1.9.1