From patchwork Wed Jan 11 14:41:16 2017
X-Patchwork-Submitter: Christopher Covington
X-Patchwork-Id: 9510205
From: Christopher Covington
To: Paolo Bonzini, Radim Krčmář, Christoffer Dall, Marc Zyngier,
	Catalin Marinas, Will Deacon, kvm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org, shankerd@codeaurora.org,
	timur@codeaurora.org
Subject: [PATCH v3 3/5] arm64: Create and use __tlbi_dsb() macros
Date: Wed, 11 Jan 2017 09:41:16 -0500
Message-Id: <20170111144118.17062-3-cov@codeaurora.org>
In-Reply-To: <20170111144118.17062-1-cov@codeaurora.org>
References: <20170111144118.17062-1-cov@codeaurora.org>
Cc: Jon Masters, Mark Langsdorf, Christopher Covington, Mark Salter

This refactoring will allow an errata workaround that repeats tlbi/dsb
sequences to only change one location. This is not intended to change the
generated assembly: comparing before-and-after preprocessor output of
arch/arm64/mm/mmu.c and the vmlinux objdump shows no functional changes.

Signed-off-by: Christopher Covington
---
 arch/arm64/include/asm/tlbflush.h | 104 +++++++++++++++++++++++++-------------
 1 file changed, 69 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index deab523..f28813c 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -25,22 +25,69 @@
 #include

 /*
- * Raw TLBI operations.
+ * Raw TLBI, DSB operations
  *
- * Where necessary, use the __tlbi() macro to avoid asm()
- * boilerplate. Drivers and most kernel code should use the TLB
- * management routines in preference to the macro below.
+ * Where necessary, use __tlbi_*dsb() macros to avoid asm() boilerplate.
+ * Drivers and most kernel code should use the TLB management routines in
+ * preference to the macros below.
  *
- * The macro can be used as __tlbi(op) or __tlbi(op, arg), depending
- * on whether a particular TLBI operation takes an argument or
- * not. The macros handles invoking the asm with or without the
- * register argument as appropriate.
+ * The __tlbi_dsb() macro handles invoking the asm without any register
+ * argument, with a single register argument, and with start (included)
+ * and end (excluded) range of register arguments. For example:
+ *
+ * __tlbi_dsb(op, attr)
+ *
+ *	tlbi op
+ *	dsb attr
+ *
+ * __tlbi_dsb(op, attr, addr)
+ *
+ *	mov %[addr], =addr
+ *	tlbi op, %[addr]
+ *	dsb attr
+ *
+ * __tlbi_range_dsb(op, attr, start, end)
+ *
+ *	mov %[addr], =start
+ *	mov %[end], =end
+ * for:
+ *	tlbi op, %[addr]
+ *	add %[addr], %[addr], #(1 << (PAGE_SHIFT - 12))
+ *	cmp %[addr], %[end]
+ *	b.ne for
+ *	dsb attr
  */
-#define __TLBI_0(op, arg)		asm ("tlbi " #op)
-#define __TLBI_1(op, arg)		asm ("tlbi " #op ", %0" : : "r" (arg))
-#define __TLBI_N(op, arg, n, ...)	__TLBI_##n(op, arg)
-#define __tlbi(op, ...)			__TLBI_N(op, ##__VA_ARGS__, 1, 0)
+#define __TLBI_FOR_0(ig0, ig1, ig2)
+#define __TLBI_INSTR_0(op, ig1, ig2)	"tlbi " #op
+#define __TLBI_IO_0(ig0, ig1, ig2)	: :
+
+#define __TLBI_FOR_1(ig0, ig1, ig2)
+#define __TLBI_INSTR_1(op, ig0, ig1)	"tlbi " #op ", %0"
+#define __TLBI_IO_1(ig0, arg, ig1)	: : "r" (arg)
+
+#define __TLBI_FOR_2(ig0, start, ig1)	unsigned long addr;		\
+					for (addr = start; addr < end;	\
+					     addr += 1 << (PAGE_SHIFT - 12))
+#define __TLBI_INSTR_2(op, ig0, ig1)	"tlbi " #op ", %0"
+#define __TLBI_IO_2(ig0, ig1, ig2)	: : "r" (addr)
+
+#define __TLBI_FOR_N(op, a1, a2, n, ...)	__TLBI_FOR_##n(op, a1, a2)
+#define __TLBI_INSTR_N(op, a1, a2, n, ...)	__TLBI_INSTR_##n(op, a1, a2)
+#define __TLBI_IO_N(op, a1, a2, n, ...)		__TLBI_IO_##n(op, a1, a2)
+
+#define __TLBI_FOR(op, ...)	__TLBI_FOR_N(op, ##__VA_ARGS__, 2, 1, 0)
+#define __TLBI_INSTR(op, ...)	__TLBI_INSTR_N(op, ##__VA_ARGS__, 2, 1, 0)
+#define __TLBI_IO(op, ...)	__TLBI_IO_N(op, ##__VA_ARGS__, 2, 1, 0)
+
+#define __tlbi_asm_dsb(as, op, attr, ...) do {				\
+	__TLBI_FOR(op, ##__VA_ARGS__)					\
+		asm (__TLBI_INSTR(op, ##__VA_ARGS__)			\
+		     __TLBI_IO(op, ##__VA_ARGS__));			\
+	asm volatile (as "\ndsb " #attr "\n"				\
+		      : : : "memory"); } while (0)
+
+#define __tlbi_dsb(...)		__tlbi_asm_dsb("", ##__VA_ARGS__)

 /*
  * TLB Management
@@ -84,16 +131,14 @@
 static inline void local_flush_tlb_all(void)
 {
 	dsb(nshst);
-	__tlbi(vmalle1);
-	dsb(nsh);
+	__tlbi_dsb(vmalle1, nsh);
 	isb();
 }

 static inline void flush_tlb_all(void)
 {
 	dsb(ishst);
-	__tlbi(vmalle1is);
-	dsb(ish);
+	__tlbi_dsb(vmalle1is, ish);
 	isb();
 }

@@ -102,8 +147,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 	unsigned long asid = ASID(mm) << 48;

 	dsb(ishst);
-	__tlbi(aside1is, asid);
-	dsb(ish);
+	__tlbi_dsb(aside1is, ish, asid);
 }

 static inline void flush_tlb_page(struct vm_area_struct *vma,
@@ -112,8 +156,7 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
 	unsigned long addr = uaddr >> 12 | (ASID(vma->vm_mm) << 48);

 	dsb(ishst);
-	__tlbi(vale1is, addr);
-	dsb(ish);
+	__tlbi_dsb(vale1is, ish, addr);
 }

 /*
@@ -127,7 +170,6 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     bool last_level)
 {
 	unsigned long asid = ASID(vma->vm_mm) << 48;
-	unsigned long addr;

 	if ((end - start) > MAX_TLB_RANGE) {
 		flush_tlb_mm(vma->vm_mm);
@@ -138,13 +180,10 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	end = asid | (end >> 12);

 	dsb(ishst);
-	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12)) {
-		if (last_level)
-			__tlbi(vale1is, addr);
-		else
-			__tlbi(vae1is, addr);
-	}
-	dsb(ish);
+	if (last_level)
+		__tlbi_dsb(vale1is, ish, start, end);
+	else
+		__tlbi_dsb(vae1is, ish, start, end);
 }

 static inline void flush_tlb_range(struct vm_area_struct *vma,
@@ -155,8 +194,6 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,

 static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
-	unsigned long addr;
-
 	if ((end - start) > MAX_TLB_RANGE) {
 		flush_tlb_all();
 		return;
@@ -166,9 +203,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
 	end >>= 12;

 	dsb(ishst);
-	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
-		__tlbi(vaae1is, addr);
-	dsb(ish);
+	__tlbi_dsb(vaae1is, ish, start, end);
 	isb();
 }

@@ -181,8 +216,7 @@ static inline void __flush_tlb_pgtable(struct mm_struct *mm,
 {
 	unsigned long addr = uaddr >> 12 | (ASID(mm) << 48);

-	__tlbi(vae1is, addr);
-	dsb(ish);
+	__tlbi_dsb(vae1is, ish, addr);
 }

 #endif