From patchwork Fri Oct 18 18:16:35 2019
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 11199357
From: Robin Murphy
To: will@kernel.org, catalin.marinas@arm.com
Cc: Sam Tebbs, sam-tebbs@arm.com, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 1/8] arm64: Allow passing fault address to fixup handlers
Date: Fri, 18 Oct 2019 19:16:35 +0100

From: Sam Tebbs

Extend fixup_exception() to optionally place the faulting address in a
register when returning to a fixup handler. Since A64 instructions must
be 4-byte-aligned, we can mimic the IA-64 implementation and encode a
flag in the lower bits of the offset field to indicate handlers which
expect an address.

This will allow us to use more efficient offset addressing modes in
usercopy routines, rather than updating the base register on every
access just for the sake of inferring where a fault occurred in order
to compute the return value upon failure.

The choice of x15 is somewhat arbitrary, but with the consideration that
as the highest-numbered temporary register with no possible 'special'
role in the ABI, it is most likely not used by hand-written assembly
code, and thus a minimally-invasive option for imported routines.

Signed-off-by: Sam Tebbs
[ rm: split into separate patch, use UL(), expand commit message ]
Signed-off-by: Robin Murphy
---
 arch/arm64/include/asm/assembler.h |  9 +++++++++
 arch/arm64/include/asm/extable.h   | 10 +++++++++-
 arch/arm64/mm/extable.c            | 13 +++++++++----
 arch/arm64/mm/fault.c              |  2 +-
 4 files changed, 28 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index b8cf7c85ffa2..02bb156cbf0e 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -142,6 +143,14 @@ alternative_endif
 	.popsection
 	.endm

+/*
+ * Emit an entry into the exception table.
+ * The fixup handler will receive the faulting address in x15
+ */
+	.macro		_asm_extable_faultaddr, from, to
+	_asm_extable	\from, \to + FIXUP_WITH_ADDR
+	.endm
+
 #define USER(l, x...)
\ 9999: x; \ _asm_extable 9999b, l diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h index 56a4f68b262e..4c4955f2bb44 100644 --- a/arch/arm64/include/asm/extable.h +++ b/arch/arm64/include/asm/extable.h @@ -2,6 +2,12 @@ #ifndef __ASM_EXTABLE_H #define __ASM_EXTABLE_H +#include + +#define FIXUP_WITH_ADDR UL(1) + +#ifndef __ASSEMBLY__ + /* * The exception table consists of pairs of relative offsets: the first * is the relative offset to an instruction that is allowed to fault, @@ -22,5 +28,7 @@ struct exception_table_entry #define ARCH_HAS_RELATIVE_EXTABLE -extern int fixup_exception(struct pt_regs *regs); +extern int fixup_exception(struct pt_regs *regs, unsigned long addr); + +#endif #endif diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c index 81e694af5f8c..e6578c2814b5 100644 --- a/arch/arm64/mm/extable.c +++ b/arch/arm64/mm/extable.c @@ -6,13 +6,18 @@ #include #include -int fixup_exception(struct pt_regs *regs) +int fixup_exception(struct pt_regs *regs, unsigned long addr) { const struct exception_table_entry *fixup; fixup = search_exception_tables(instruction_pointer(regs)); - if (fixup) - regs->pc = (unsigned long)&fixup->fixup + fixup->fixup; - + if (fixup) { + unsigned long offset = fixup->fixup; + if (offset & FIXUP_WITH_ADDR) { + regs->regs[15] = addr; + offset &= ~FIXUP_WITH_ADDR; + } + regs->pc = (unsigned long)&fixup->fixup + offset; + } return fixup != NULL; } diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 855f2a7954e6..5272e9377858 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -304,7 +304,7 @@ static void __do_kernel_fault(unsigned long addr, unsigned int esr, * Are we prepared to handle this kernel fault? * We are almost certainly not prepared to handle instruction faults. 
*/ - if (!is_el1_instruction_abort(esr) && fixup_exception(regs)) + if (!is_el1_instruction_abort(esr) && fixup_exception(regs, addr)) return; if (WARN_RATELIMIT(is_spurious_el1_translation_fault(addr, esr, regs), From patchwork Fri Oct 18 18:16:36 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11199365 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 99EC4112B for ; Fri, 18 Oct 2019 18:18:03 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 7020320820 for ; Fri, 18 Oct 2019 18:18:03 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="HZjJ0yXQ" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7020320820 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=arm.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=elMn+nLvUnk28o56jEKxpnCgFyzTLSnBOCW1AyhPJ/Q=; b=HZjJ0yXQX0WfZ9 JZHnXD+MomoC3pdBg7GNfYNYXb3lCRl5LmYaMwXPx9Gw/jOOp1rQ1nQ/Toj+OUsOTjO/B3a9f98JH BXrMHtfqF7LThGpQNyD7nuYF56ia5Y847Pl9t83Gq0Fvec7vypgaouc8N0JoJ/CwF73vlR6EJziXb LfCamKtiAw084LlPlKxs13xAH8ynOETi4NuEZ/4kVilVyKELgBisN8WRfpHsN+Mqw25CT7S8ss/ES kwZhznXhnvbt4oIK+cD2mAL5ZL7zODAvK7CT1Uhmzb2M+DUrFevYUxMKIDLIoeieU1EeV1Hf40ZdO JDQFJSPb0UZrcu3Hi8XQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1iLWp9-0008NY-0A; Fri, 18 Oct 2019 18:18:03 +0000 Received: from [217.140.110.172] (helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1iLWo5-0007Ry-Jj for linux-arm-kernel@lists.infradead.org; Fri, 18 Oct 2019 18:17:04 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 053E715A1; Fri, 18 Oct 2019 11:16:50 -0700 (PDT) Received: from e110467-lin.cambridge.arm.com (e110467-lin.cambridge.arm.com [10.1.197.57]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 1D48A3F718; Fri, 18 Oct 2019 11:16:49 -0700 (PDT) From: Robin Murphy To: will@kernel.org, catalin.marinas@arm.com Subject: [PATCH v2 2/8] arm64: Import latest Cortex Strings memcpy implementation Date: Fri, 18 Oct 2019 19:16:36 +0100 Message-Id: <78649f677030c325afc323b622087c0beba53ca6.1571421836.git.robin.murphy@arm.com> X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20191018_111657_753363_EEDEB025 X-CRM114-Status: GOOD ( 17.81 ) X-Spam-Score: 1.3 (+) X-Spam-Report: SpamAssassin 
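For reference, the trick described in the commit message of patch 1 boils down to this:
every A64 instruction, and therefore every fixup target, is 4-byte aligned, so bit 0 of
the relative offset stored in the exception table is always zero and can carry a
"pass the fault address" flag. A rough C model of the decode step follows; it is a
sketch only, and decode_fixup/regs15 are illustrative names, not kernel symbols.

    #include <stdint.h>

    #define FIXUP_WITH_ADDR 1UL    /* bit 0 of the relative fixup offset */

    /*
     * fixup_field points at the 32-bit relative offset inside the extable
     * entry; regs15 stands in for the saved x15 slot in struct pt_regs.
     */
    static uintptr_t decode_fixup(const int32_t *fixup_field, uintptr_t fault_addr,
                                  uintptr_t *regs15)
    {
            uintptr_t offset = (uintptr_t)(intptr_t)*fixup_field;

            if (offset & FIXUP_WITH_ADDR) {     /* handler wants the address */
                    *regs15 = fault_addr;       /* delivered in x15 */
                    offset &= ~FIXUP_WITH_ADDR;
            }
            /* Relative entries encode "target minus address of the field". */
            return (uintptr_t)fixup_field + offset;
    }

On the assembly side no new table format is needed: _asm_extable_faultaddr emits an
ordinary entry whose target simply has FIXUP_WITH_ADDR added to it, as the assembler.h
hunk in patch 1 shows.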
version 3.4.2 on bombadil.infradead.org summary: Content analysis details: (1.3 points) pts rule name description ---- ---------------------- -------------------------------------------------- 0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record -0.0 SPF_PASS SPF: sender matches SPF record 1.3 RDNS_NONE Delivered to internal network by a host with no rDNS X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Sam Tebbs , sam-tebbs@arm.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org From: Sam Tebbs Import the latest Cortex Strings memcpy implementation into memcpy, copy_{from, to and in}_user. The implementation of the user routines is separated into two forms: one for when UAO is enabled and one for when UAO is disabled, with the two being chosen between with a runtime patch. This avoids executing the many NOPs emitted when UAO is disabled. The upstream source is src/aarch64/memcpy.S as of commit 9e048b995da4 in https://git.linaro.org/toolchain/cortex-strings.git. Signed-off-by: Sam Tebbs [ rm: add UAO fixups, streamline copy_exit paths, expand commit message ] Signed-off-by: Robin Murphy --- arch/arm64/include/asm/alternative.h | 36 ---- arch/arm64/lib/copy_from_user.S | 115 +++++++--- arch/arm64/lib/copy_in_user.S | 130 +++++++++--- arch/arm64/lib/copy_template.S | 304 +++++++++++++-------------- arch/arm64/lib/copy_template_user.S | 24 +++ arch/arm64/lib/copy_to_user.S | 113 +++++++--- arch/arm64/lib/copy_user_fixup.S | 14 ++ arch/arm64/lib/memcpy.S | 48 +++-- 8 files changed, 495 insertions(+), 289 deletions(-) create mode 100644 arch/arm64/lib/copy_template_user.S create mode 100644 arch/arm64/lib/copy_user_fixup.S diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h index b9f8d787eea9..01f19f3cb46a 100644 --- a/arch/arm64/include/asm/alternative.h +++ b/arch/arm64/include/asm/alternative.h @@ -220,36 +220,6 @@ alternative_endif * unprivileged instructions, and USER() only works for single instructions. 
*/ #ifdef CONFIG_ARM64_UAO - .macro uao_ldp l, reg1, reg2, addr, post_inc - alternative_if_not ARM64_HAS_UAO -8888: ldp \reg1, \reg2, [\addr], \post_inc; -8889: nop; - nop; - alternative_else - ldtr \reg1, [\addr]; - ldtr \reg2, [\addr, #8]; - add \addr, \addr, \post_inc; - alternative_endif - - _asm_extable 8888b,\l; - _asm_extable 8889b,\l; - .endm - - .macro uao_stp l, reg1, reg2, addr, post_inc - alternative_if_not ARM64_HAS_UAO -8888: stp \reg1, \reg2, [\addr], \post_inc; -8889: nop; - nop; - alternative_else - sttr \reg1, [\addr]; - sttr \reg2, [\addr, #8]; - add \addr, \addr, \post_inc; - alternative_endif - - _asm_extable 8888b,\l; - _asm_extable 8889b,\l; - .endm - .macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc alternative_if_not ARM64_HAS_UAO 8888: \inst \reg, [\addr], \post_inc; @@ -262,12 +232,6 @@ alternative_endif _asm_extable 8888b,\l; .endm #else - .macro uao_ldp l, reg1, reg2, addr, post_inc - USER(\l, ldp \reg1, \reg2, [\addr], \post_inc) - .endm - .macro uao_stp l, reg1, reg2, addr, post_inc - USER(\l, stp \reg1, \reg2, [\addr], \post_inc) - .endm .macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc USER(\l, \inst \reg, [\addr], \post_inc) .endm diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S index 680e74409ff9..8928c38d8c76 100644 --- a/arch/arm64/lib/copy_from_user.S +++ b/arch/arm64/lib/copy_from_user.S @@ -20,51 +20,114 @@ * x0 - bytes not copied */ - .macro ldrb1 ptr, regB, val - uao_user_alternative 9998f, ldrb, ldtrb, \ptr, \regB, \val + .macro ldrb1 reg, ptr, offset=0 + 8888: ldtrb \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; .endm - .macro strb1 ptr, regB, val - strb \ptr, [\regB], \val + .macro strb1 reg, ptr, offset=0 + strb \reg, [\ptr, \offset] .endm - .macro ldrh1 ptr, regB, val - uao_user_alternative 9998f, ldrh, ldtrh, \ptr, \regB, \val + .macro ldrb1_reg reg, ptr, offset + add \ptr, \ptr, \offset + 8888: ldtrb \reg, [\ptr] + sub \ptr, \ptr, \offset + _asm_extable_faultaddr 8888b,9998f; .endm - .macro strh1 ptr, regB, val - strh \ptr, [\regB], \val + .macro strb1_reg reg, ptr, offset + strb \reg, [\ptr, \offset] .endm - .macro ldr1 ptr, regB, val - uao_user_alternative 9998f, ldr, ldtr, \ptr, \regB, \val + .macro ldr1 reg, ptr, offset=0 + 8888: ldtr \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; .endm - .macro str1 ptr, regB, val - str \ptr, [\regB], \val + .macro str1 reg, ptr, offset=0 + str \reg, [\ptr, \offset] .endm - .macro ldp1 ptr, regB, regC, val - uao_ldp 9998f, \ptr, \regB, \regC, \val + .macro ldp1 regA, regB, ptr, offset=0 + 8888: ldtr \regA, [\ptr, \offset] + 8889: ldtr \regB, [\ptr, \offset + 8] + _asm_extable_faultaddr 8888b,9998f; + _asm_extable_faultaddr 8889b,9998f; .endm - .macro stp1 ptr, regB, regC, val - stp \ptr, \regB, [\regC], \val + .macro stp1 regA, regB, ptr, offset=0 + stp \regA, \regB, [\ptr, \offset] + .endm + + .macro ldp1_pre regA, regB, ptr, offset + 8888: ldtr \regA, [\ptr, \offset] + 8889: ldtr \regB, [\ptr, \offset + 8] + add \ptr, \ptr, \offset + _asm_extable_faultaddr 8888b,9998f; + _asm_extable_faultaddr 8889b,9998f; + .endm + + .macro stp1_pre regA, regB, ptr, offset + stp \regA, \regB, [\ptr, \offset]! 
+ .endm + + .macro ldrb1_nuao reg, ptr, offset=0 + 8888: ldrb \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro strb1_nuao reg, ptr, offset=0 + strb \reg, [\ptr, \offset] + .endm + + .macro ldrb1_nuao_reg reg, ptr, offset=0 + 8888: ldrb \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro strb1_nuao_reg reg, ptr, offset=0 + strb \reg, [\ptr, \offset] + .endm + + .macro ldr1_nuao reg, ptr, offset=0 + 8888: ldr \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro str1_nuao reg, ptr, offset=0 + str \reg, [\ptr, \offset] + .endm + + .macro ldp1_nuao regA, regB, ptr, offset=0 + 8888: ldp \regA, \regB, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro stp1_nuao regA, regB, ptr, offset=0 + stp \regA, \regB, [\ptr, \offset] + .endm + + .macro ldp1_pre_nuao regA, regB, ptr, offset + 8888: ldp \regA, \regB, [\ptr, \offset]! + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro stp1_pre_nuao regA, regB, ptr, offset + stp \regA, \regB, [\ptr, \offset]! + .endm + + .macro copy_exit + b .Luaccess_finish .endm -end .req x5 ENTRY(__arch_copy_from_user) uaccess_enable_not_uao x3, x4, x5 - add end, x0, x2 -#include "copy_template.S" +#include "copy_template_user.S" +.Luaccess_finish: uaccess_disable_not_uao x3, x4 - mov x0, #0 // Nothing to copy + mov x0, #0 ret ENDPROC(__arch_copy_from_user) EXPORT_SYMBOL(__arch_copy_from_user) - - .section .fixup,"ax" - .align 2 -9998: sub x0, end, dst // bytes not copied - ret - .previous +#include "copy_user_fixup.S" diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S index 0bedae3f3792..2aecdc300c8d 100644 --- a/arch/arm64/lib/copy_in_user.S +++ b/arch/arm64/lib/copy_in_user.S @@ -21,52 +21,132 @@ * Returns: * x0 - bytes not copied */ - .macro ldrb1 ptr, regB, val - uao_user_alternative 9998f, ldrb, ldtrb, \ptr, \regB, \val + + .macro ldrb1 reg, ptr, offset=0 + 8888: ldtrb \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; .endm - .macro strb1 ptr, regB, val - uao_user_alternative 9998f, strb, sttrb, \ptr, \regB, \val + .macro strb1 reg, ptr, offset=0 + 8888: sttrb \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; .endm - .macro ldrh1 ptr, regB, val - uao_user_alternative 9998f, ldrh, ldtrh, \ptr, \regB, \val + .macro ldrb1_reg reg, ptr, offset + add \ptr, \ptr, \offset + 8888: ldtrb \reg, [\ptr] + sub \ptr, \ptr, \offset + _asm_extable_faultaddr 8888b,9998f; .endm - .macro strh1 ptr, regB, val - uao_user_alternative 9998f, strh, sttrh, \ptr, \regB, \val + .macro strb1_reg reg, ptr, offset + add \ptr, \ptr, \offset + 8888: sttrb \reg, [\ptr] + sub \ptr, \ptr, \offset + _asm_extable_faultaddr 8888b,9998f; .endm - .macro ldr1 ptr, regB, val - uao_user_alternative 9998f, ldr, ldtr, \ptr, \regB, \val + .macro ldr1 reg, ptr, offset=0 + 8888: ldtr \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; .endm - .macro str1 ptr, regB, val - uao_user_alternative 9998f, str, sttr, \ptr, \regB, \val + .macro str1 reg, ptr, offset=0 + 8888: sttr \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; .endm - .macro ldp1 ptr, regB, regC, val - uao_ldp 9998f, \ptr, \regB, \regC, \val + .macro ldp1 regA, regB, ptr, offset=0 + 8888: ldtr \regA, [\ptr, \offset] + 8889: ldtr \regB, [\ptr, \offset + 8] + _asm_extable_faultaddr 8888b,9998f; + _asm_extable_faultaddr 8889b,9998f; .endm - .macro stp1 ptr, regB, regC, val - uao_stp 9998f, \ptr, \regB, \regC, \val + .macro stp1 regA, regB, ptr, offset=0 + 8888: sttr \regA, [\ptr, \offset] 
+ 8889: sttr \regB, [\ptr, \offset + 8] + _asm_extable_faultaddr 8888b,9998f; + _asm_extable_faultaddr 8889b,9998f; .endm -end .req x5 + .macro ldp1_pre regA, regB, ptr, offset + 8888: ldtr \regA, [\ptr, \offset] + 8889: ldtr \regB, [\ptr, \offset + 8] + add \ptr, \ptr, \offset + _asm_extable_faultaddr 8888b,9998f; + _asm_extable_faultaddr 8889b,9998f; + .endm + + .macro stp1_pre regA, regB, ptr, offset + 8888: sttr \regA, [\ptr, \offset] + 8889: sttr \regB, [\ptr, \offset + 8] + add \ptr, \ptr, \offset + _asm_extable_faultaddr 8888b,9998f; + _asm_extable_faultaddr 8889b,9998f; + .endm + + .macro ldrb1_nuao reg, ptr, offset=0 + 8888: ldrb \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro strb1_nuao reg, ptr, offset=0 + 8888: strb \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro ldrb1_nuao_reg reg, ptr, offset=0 + 8888: ldrb \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro strb1_nuao_reg reg, ptr, offset=0 + 8888: strb \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro ldr1_nuao reg, ptr, offset=0 + 8888: ldr \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro str1_nuao reg, ptr, offset=0 + 8888: str \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro ldp1_nuao regA, regB, ptr, offset=0 + 8888: ldp \regA, \regB, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro stp1_nuao regA, regB, ptr, offset=0 + 8888: stp \regA, \regB, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro ldp1_pre_nuao regA, regB, ptr, offset + 8888: ldp \regA, \regB, [\ptr, \offset]! + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro stp1_pre_nuao regA, regB, ptr, offset + 8888: stp \regA, \regB, [\ptr, \offset]! + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro copy_exit + b .Luaccess_finish + .endm ENTRY(__arch_copy_in_user) uaccess_enable_not_uao x3, x4, x5 - add end, x0, x2 -#include "copy_template.S" +#include "copy_template_user.S" +.Luaccess_finish: uaccess_disable_not_uao x3, x4 mov x0, #0 ret ENDPROC(__arch_copy_in_user) EXPORT_SYMBOL(__arch_copy_in_user) - - .section .fixup,"ax" - .align 2 -9998: sub x0, end, dst // bytes not copied - ret - .previous +#include "copy_user_fixup.S" diff --git a/arch/arm64/lib/copy_template.S b/arch/arm64/lib/copy_template.S index 488df234c49a..c03694a6a342 100644 --- a/arch/arm64/lib/copy_template.S +++ b/arch/arm64/lib/copy_template.S @@ -1,13 +1,12 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* - * Copyright (C) 2013 ARM Ltd. - * Copyright (C) 2013 Linaro. + * Copyright (c) 2012 Linaro Limited. All rights reserved. + * Copyright (c) 2015 ARM Ltd. All rights reserved. 
* - * This code is based on glibc cortex strings work originally authored by Linaro - * be found @ + * This code is based on glibc Cortex Strings work originally authored by + * Linaro, found at: * - * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/ - * files/head:/src/aarch64/ + * https://git.linaro.org/toolchain/cortex-strings.git */ @@ -21,161 +20,146 @@ * Returns: * x0 - dest */ -dstin .req x0 -src .req x1 -count .req x2 -tmp1 .req x3 -tmp1w .req w3 -tmp2 .req x4 -tmp2w .req w4 -dst .req x6 + #define dstin x0 + #define src x1 + #define count x2 + #define dst x3 + #define srcend x4 + #define dstend x5 + #define A_l x6 + #define A_lw w6 + #define A_h x7 + #define A_hw w7 + #define B_l x8 + #define B_lw w8 + #define B_h x9 + #define C_l x10 + #define C_h x11 + #define D_l x12 + #define D_h x13 + #define E_l src + #define E_h count + #define F_l srcend + #define F_h dst + #define tmp1 x9 -A_l .req x7 -A_h .req x8 -B_l .req x9 -B_h .req x10 -C_l .req x11 -C_h .req x12 -D_l .req x13 -D_h .req x14 + prfm PLDL1KEEP, [src] + add srcend, src, count + add dstend, dstin, count + cmp count, 16 + b.ls L(copy16) + cmp count, 96 + b.hi L(copy_long) - mov dst, dstin - cmp count, #16 - /*When memory length is less than 16, the accessed are not aligned.*/ - b.lo .Ltiny15 - - neg tmp2, src - ands tmp2, tmp2, #15/* Bytes to reach alignment. */ - b.eq .LSrcAligned - sub count, count, tmp2 - /* - * Copy the leading memory data from src to dst in an increasing - * address order.By this way,the risk of overwriting the source - * memory data is eliminated when the distance between src and - * dst is less than 16. The memory accesses here are alignment. - */ - tbz tmp2, #0, 1f - ldrb1 tmp1w, src, #1 - strb1 tmp1w, dst, #1 + /* Medium copies: 17..96 bytes. */ + sub tmp1, count, 1 + ldp1 A_l, A_h, src + tbnz tmp1, 6, L(copy96) + ldp1 D_l, D_h, srcend, -16 + tbz tmp1, 5, 1f + ldp1 B_l, B_h, src, 16 + ldp1 C_l, C_h, srcend, -32 + stp1 B_l, B_h, dstin, 16 + stp1 C_l, C_h, dstend, -32 1: - tbz tmp2, #1, 2f - ldrh1 tmp1w, src, #2 - strh1 tmp1w, dst, #2 + stp1 A_l, A_h, dstin + stp1 D_l, D_h, dstend, -16 + copy_exit + + .p2align 4 + /* Small copies: 0..16 bytes. */ +L(copy16): + cmp count, 8 + b.lo 1f + ldr1 A_l, src + ldr1 A_h, srcend, -8 + str1 A_l, dstin + str1 A_h, dstend, -8 + copy_exit + .p2align 4 +1: + tbz count, 2, 1f + ldr1 A_lw, src + ldr1 A_hw, srcend, -4 + str1 A_lw, dstin + str1 A_hw, dstend, -4 + copy_exit + + /* Copy 0..3 bytes. Use a branchless sequence that copies the same + byte 3 times if count==1, or the 2nd byte twice if count==2. */ +1: + cbz count, 2f + lsr tmp1, count, 1 + ldrb1 A_lw, src + ldrb1 A_hw, srcend, -1 + ldrb1_reg B_lw, src, tmp1 + strb1 A_lw, dstin + strb1_reg B_lw, dstin, tmp1 + strb1 A_hw, dstend, -1 +2: copy_exit + + .p2align 4 + /* Copy 64..96 bytes. Copy 64 bytes from the start and + 32 bytes from the end. */ +L(copy96): + ldp1 B_l, B_h, src, 16 + ldp1 C_l, C_h, src, 32 + ldp1 D_l, D_h, src, 48 + ldp1 E_l, E_h, srcend, -32 + ldp1 F_l, F_h, srcend, -16 + stp1 A_l, A_h, dstin + stp1 B_l, B_h, dstin, 16 + stp1 C_l, C_h, dstin, 32 + stp1 D_l, D_h, dstin, 48 + stp1 E_l, E_h, dstend, -32 + stp1 F_l, F_h, dstend, -16 + copy_exit + + /* Align DST to 16 byte alignment so that we don't cross cache line + boundaries on both loads and stores. There are at least 96 bytes + to copy, so copy 16 bytes unaligned and then align. The loop + copies 64 bytes per iteration and prefetches one iteration ahead. 
*/ + + .p2align 4 +L(copy_long): + and tmp1, dstin, 15 + bic dst, dstin, 15 + ldp1 D_l, D_h, src + sub src, src, tmp1 + add count, count, tmp1 /* Count is now 16 too large. */ + ldp1 A_l, A_h, src, 16 + stp1 D_l, D_h, dstin + ldp1 B_l, B_h, src, 32 + ldp1 C_l, C_h, src, 48 + ldp1_pre D_l, D_h, src, 64 + subs count, count, 128 + 16 /* Test and readjust count. */ + b.ls 2f +1: + stp1 A_l, A_h, dst, 16 + ldp1 A_l, A_h, src, 16 + stp1 B_l, B_h, dst, 32 + ldp1 B_l, B_h, src, 32 + stp1 C_l, C_h, dst, 48 + ldp1 C_l, C_h, src, 48 + stp1_pre D_l, D_h, dst, 64 + ldp1_pre D_l, D_h, src, 64 + subs count, count, 64 + b.hi 1b + + /* Write the last full set of 64 bytes. The remainder is at most 64 + bytes, so it is safe to always copy 64 bytes from the end even if + there is just 1 byte left. */ 2: - tbz tmp2, #2, 3f - ldr1 tmp1w, src, #4 - str1 tmp1w, dst, #4 -3: - tbz tmp2, #3, .LSrcAligned - ldr1 tmp1, src, #8 - str1 tmp1, dst, #8 - -.LSrcAligned: - cmp count, #64 - b.ge .Lcpy_over64 - /* - * Deal with small copies quickly by dropping straight into the - * exit block. - */ -.Ltail63: - /* - * Copy up to 48 bytes of data. At this point we only need the - * bottom 6 bits of count to be accurate. - */ - ands tmp1, count, #0x30 - b.eq .Ltiny15 - cmp tmp1w, #0x20 - b.eq 1f - b.lt 2f - ldp1 A_l, A_h, src, #16 - stp1 A_l, A_h, dst, #16 -1: - ldp1 A_l, A_h, src, #16 - stp1 A_l, A_h, dst, #16 -2: - ldp1 A_l, A_h, src, #16 - stp1 A_l, A_h, dst, #16 -.Ltiny15: - /* - * Prefer to break one ldp/stp into several load/store to access - * memory in an increasing address order,rather than to load/store 16 - * bytes from (src-16) to (dst-16) and to backward the src to aligned - * address,which way is used in original cortex memcpy. If keeping - * the original memcpy process here, memmove need to satisfy the - * precondition that src address is at least 16 bytes bigger than dst - * address,otherwise some source data will be overwritten when memove - * call memcpy directly. To make memmove simpler and decouple the - * memcpy's dependency on memmove, withdrew the original process. - */ - tbz count, #3, 1f - ldr1 tmp1, src, #8 - str1 tmp1, dst, #8 -1: - tbz count, #2, 2f - ldr1 tmp1w, src, #4 - str1 tmp1w, dst, #4 -2: - tbz count, #1, 3f - ldrh1 tmp1w, src, #2 - strh1 tmp1w, dst, #2 -3: - tbz count, #0, .Lexitfunc - ldrb1 tmp1w, src, #1 - strb1 tmp1w, dst, #1 - - b .Lexitfunc - -.Lcpy_over64: - subs count, count, #128 - b.ge .Lcpy_body_large - /* - * Less than 128 bytes to copy, so handle 64 here and then jump - * to the tail. - */ - ldp1 A_l, A_h, src, #16 - stp1 A_l, A_h, dst, #16 - ldp1 B_l, B_h, src, #16 - ldp1 C_l, C_h, src, #16 - stp1 B_l, B_h, dst, #16 - stp1 C_l, C_h, dst, #16 - ldp1 D_l, D_h, src, #16 - stp1 D_l, D_h, dst, #16 - - tst count, #0x3f - b.ne .Ltail63 - b .Lexitfunc - - /* - * Critical loop. Start at a new cache line boundary. Assuming - * 64 bytes per line this ensures the entire loop is in one line. - */ - .p2align L1_CACHE_SHIFT -.Lcpy_body_large: - /* pre-get 64 bytes data. */ - ldp1 A_l, A_h, src, #16 - ldp1 B_l, B_h, src, #16 - ldp1 C_l, C_h, src, #16 - ldp1 D_l, D_h, src, #16 -1: - /* - * interlace the load of next 64 bytes data block with store of the last - * loaded 64 bytes data. 
- */ - stp1 A_l, A_h, dst, #16 - ldp1 A_l, A_h, src, #16 - stp1 B_l, B_h, dst, #16 - ldp1 B_l, B_h, src, #16 - stp1 C_l, C_h, dst, #16 - ldp1 C_l, C_h, src, #16 - stp1 D_l, D_h, dst, #16 - ldp1 D_l, D_h, src, #16 - subs count, count, #64 - b.ge 1b - stp1 A_l, A_h, dst, #16 - stp1 B_l, B_h, dst, #16 - stp1 C_l, C_h, dst, #16 - stp1 D_l, D_h, dst, #16 - - tst count, #0x3f - b.ne .Ltail63 -.Lexitfunc: + ldp1 E_l, E_h, srcend, -64 + stp1 A_l, A_h, dst, 16 + ldp1 A_l, A_h, srcend, -48 + stp1 B_l, B_h, dst, 32 + ldp1 B_l, B_h, srcend, -32 + stp1 C_l, C_h, dst, 48 + ldp1 C_l, C_h, srcend, -16 + stp1 D_l, D_h, dst, 64 + stp1 E_l, E_h, dstend, -64 + stp1 A_l, A_h, dstend, -48 + stp1 B_l, B_h, dstend, -32 + stp1 C_l, C_h, dstend, -16 + copy_exit diff --git a/arch/arm64/lib/copy_template_user.S b/arch/arm64/lib/copy_template_user.S new file mode 100644 index 000000000000..3db24dcdab05 --- /dev/null +++ b/arch/arm64/lib/copy_template_user.S @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +#define L(l) .L ## l + + alternative_if_not ARM64_HAS_UAO + b L(copy_non_uao) + alternative_else_nop_endif +#include "copy_template.S" + +#define ldp1 ldp1_nuao +#define ldp1_pre ldp1_pre_nuao +#define stp1 stp1_nuao +#define stp1_pre stp1_pre_nuao +#define ldr1 ldr1_nuao +#define str1 str1_nuao +#define ldrb1 ldrb1_nuao +#define strb1 strb1_nuao +#define ldrb1_reg ldrb1_nuao_reg +#define strb1_reg strb1_nuao_reg + +L(copy_non_uao): +#undef L +#define L(l) .Lnuao ## l +#include "copy_template.S" diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S index 2d88c736e8f2..d49db097bc31 100644 --- a/arch/arm64/lib/copy_to_user.S +++ b/arch/arm64/lib/copy_to_user.S @@ -19,51 +19,114 @@ * Returns: * x0 - bytes not copied */ - .macro ldrb1 ptr, regB, val - ldrb \ptr, [\regB], \val + + .macro ldrb1 reg, ptr, offset=0 + ldrb \reg, [\ptr, \offset] .endm - .macro strb1 ptr, regB, val - uao_user_alternative 9998f, strb, sttrb, \ptr, \regB, \val + .macro strb1 reg, ptr, offset=0 + 8888: sttrb \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; .endm - .macro ldrh1 ptr, regB, val - ldrh \ptr, [\regB], \val + .macro ldrb1_reg reg, ptr, offset + ldrb \reg, [\ptr, \offset] .endm - .macro strh1 ptr, regB, val - uao_user_alternative 9998f, strh, sttrh, \ptr, \regB, \val + .macro strb1_reg reg, ptr, offset + add \ptr, \ptr, \offset + 8888: sttrb \reg, [\ptr] + sub \ptr, \ptr, \offset + _asm_extable_faultaddr 8888b,9998f; .endm - .macro ldr1 ptr, regB, val - ldr \ptr, [\regB], \val + .macro ldr1 reg, ptr, offset=0 + ldr \reg, [\ptr, \offset] .endm - .macro str1 ptr, regB, val - uao_user_alternative 9998f, str, sttr, \ptr, \regB, \val + .macro str1 reg, ptr, offset=0 + 8888: sttr \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; .endm - .macro ldp1 ptr, regB, regC, val - ldp \ptr, \regB, [\regC], \val + .macro ldp1 regA, regB, ptr, offset=0 + ldp \regA, \regB, [\ptr, \offset] .endm - .macro stp1 ptr, regB, regC, val - uao_stp 9998f, \ptr, \regB, \regC, \val + .macro stp1 regA, regB, ptr, offset=0 + 8888: sttr \regA, [\ptr, \offset] + 8889: sttr \regB, [\ptr, \offset + 8] + _asm_extable_faultaddr 8888b,9998f; + _asm_extable_faultaddr 8889b,9998f; + .endm + + .macro ldp1_pre regA, regB, ptr, offset + ldp \regA, \regB, [\ptr, \offset]! 
+ .endm + + .macro stp1_pre regA, regB, ptr, offset + 8888: sttr \regA, [\ptr, \offset] + 8889: sttr \regB, [\ptr, \offset + 8] + add \ptr, \ptr, \offset + _asm_extable_faultaddr 8888b,9998f; + _asm_extable_faultaddr 8889b,9998f; + .endm + + .macro ldrb1_nuao reg, ptr, offset=0 + ldrb \reg, [\ptr, \offset] + .endm + + .macro strb1_nuao reg, ptr, offset=0 + 8888: strb \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro ldrb1_nuao_reg reg, ptr, offset=0 + ldrb \reg, [\ptr, \offset] + .endm + + .macro strb1_nuao_reg reg, ptr, offset=0 + strb \reg, [\ptr, \offset] + .endm + + .macro ldr1_nuao reg, ptr, offset=0 + ldr \reg, [\ptr, \offset] + .endm + + .macro str1_nuao reg, ptr, offset=0 + 8888: str \reg, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro ldp1_nuao regA, regB, ptr, offset=0 + ldp \regA, \regB, [\ptr, \offset] + .endm + + .macro ldp1_pre_nuao regA, regB, ptr, offset + ldp \regA, \regB, [\ptr, \offset]! + .endm + + .macro stp1_nuao regA, regB, ptr, offset=0 + 8888: stp \regA, \regB, [\ptr, \offset] + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro stp1_pre_nuao regA, regB, ptr, offset + 8888: stp \regA, \regB, [\ptr, \offset]! + _asm_extable_faultaddr 8888b,9998f; + .endm + + .macro copy_exit + b .Luaccess_finish .endm -end .req x5 ENTRY(__arch_copy_to_user) uaccess_enable_not_uao x3, x4, x5 - add end, x0, x2 -#include "copy_template.S" +#include "copy_template_user.S" +.Luaccess_finish: uaccess_disable_not_uao x3, x4 mov x0, #0 ret ENDPROC(__arch_copy_to_user) EXPORT_SYMBOL(__arch_copy_to_user) - - .section .fixup,"ax" - .align 2 -9998: sub x0, end, dst // bytes not copied - ret - .previous +#include "copy_user_fixup.S" diff --git a/arch/arm64/lib/copy_user_fixup.S b/arch/arm64/lib/copy_user_fixup.S new file mode 100644 index 000000000000..32fae9e2e799 --- /dev/null +++ b/arch/arm64/lib/copy_user_fixup.S @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +addr .req x15 +.section .fixup,"ax" +.align 2 +9998: + // If it falls in the src range then it was a load that failed, + // otherwise it was a store + cmp addr, src + ccmp addr, srcend, #0x0, ge + csel x0, srcend, dstend, lt + sub x0, x0, addr + ret + diff --git a/arch/arm64/lib/memcpy.S b/arch/arm64/lib/memcpy.S index d79f48994dbb..c4a2fe0d0317 100644 --- a/arch/arm64/lib/memcpy.S +++ b/arch/arm64/lib/memcpy.S @@ -24,43 +24,57 @@ * Returns: * x0 - dest */ - .macro ldrb1 ptr, regB, val - ldrb \ptr, [\regB], \val + + #define L(l) .L ## l + + .macro ldrb1 reg, ptr, offset=0 + ldrb \reg, [\ptr, \offset] .endm - .macro strb1 ptr, regB, val - strb \ptr, [\regB], \val + .macro strb1 reg, ptr, offset=0 + strb \reg, [\ptr, \offset] .endm - .macro ldrh1 ptr, regB, val - ldrh \ptr, [\regB], \val + .macro ldr1 reg, ptr, offset=0 + ldr \reg, [\ptr, \offset] .endm - .macro strh1 ptr, regB, val - strh \ptr, [\regB], \val + .macro str1 reg, ptr, offset=0 + str \reg, [\ptr, \offset] .endm - .macro ldr1 ptr, regB, val - ldr \ptr, [\regB], \val + .macro ldp1 regA, regB, ptr, offset=0 + ldp \regA, \regB, [\ptr, \offset] .endm - .macro str1 ptr, regB, val - str \ptr, [\regB], \val + .macro stp1 regA, regB, ptr, offset=0 + stp \regA, \regB, [\ptr, \offset] .endm - .macro ldp1 ptr, regB, regC, val - ldp \ptr, \regB, [\regC], \val + .macro ldrb1_reg reg, ptr, offset + ldrb1 \reg, \ptr, \offset .endm - .macro stp1 ptr, regB, regC, val - stp \ptr, \regB, [\regC], \val + .macro strb1_reg reg, ptr, offset + strb1 \reg, \ptr, \offset + .endm + + .macro ldp1_pre regA, regB, ptr, offset + 
ldp \regA, \regB, [\ptr, \offset]! + .endm + + .macro stp1_pre regA, regB, ptr, offset + stp \regA, \regB, [\ptr, \offset]! + .endm + + .macro copy_exit + ret .endm .weak memcpy ENTRY(__memcpy) ENTRY(memcpy) #include "copy_template.S" - ret ENDPIPROC(memcpy) EXPORT_SYMBOL(memcpy) ENDPROC(__memcpy) From patchwork Fri Oct 18 18:16:37 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11199363 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C5755112B for ; Fri, 18 Oct 2019 18:17:53 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 9D50220820 for ; Fri, 18 Oct 2019 18:17:53 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="lFMS4vpr" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9D50220820 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=arm.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=dvMOoDoTs+MhKTcs6xEKXZjNXpBqd/57CzsJgf88lwI=; b=lFMS4vprAGSOsP AwV1VU1mWTwKN5s4vyKTT2otIlfAndJucRmmamRHMJoeICSBNu1wGOx3YIY4VnYfe2E0+Qk3NlMKJ u/gCy42G4Qkn1P0g1xsw2L5aTTDv7vqVsMlIq1fjODq85pYDtxhzCuLZqgJ+ImpWaRGY+cNnypWlm o/xrIDxwUw8rpVJl/mdhuLA0fdaL9XTMreMjG1wxB+VofybBoRVgcTJYupfaTG2SJZD6CgeLJufk5 asP7Dg/7tzQKF52qY4O6JFcNP9AfUQ2cc65alTuwhD2DBTURB+iSni8B/GYaAdADD8/9X1DjwOK5D NII/3KAaJoQ6r9N+OygA==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1iLWoy-0008D8-9z; Fri, 18 Oct 2019 18:17:52 +0000 Received: from [217.140.110.172] (helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1iLWo5-0007SW-Jh for linux-arm-kernel@lists.infradead.org; Fri, 18 Oct 2019 18:17:02 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 06DEE15BE; Fri, 18 Oct 2019 11:16:51 -0700 (PDT) Received: from e110467-lin.cambridge.arm.com (e110467-lin.cambridge.arm.com [10.1.197.57]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 3A3493F718; Fri, 18 Oct 2019 11:16:50 -0700 (PDT) From: Robin Murphy To: will@kernel.org, catalin.marinas@arm.com Subject: [PATCH v2 3/8] arm64: Import latest version of Cortex Strings' memcmp Date: Fri, 18 Oct 2019 19:16:37 +0100 Message-Id: <0c0860cb51272de5b73f213960e6f0ae814b017a.1571421836.git.robin.murphy@arm.com> X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20191018_111657_742940_B15D8F68 
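Regarding the UAO split described in the commit message of patch 2: conceptually the
copy template is instantiated twice and one instance is chosen per CPU, roughly as
sketched below. The real code does not test a flag at run time; the alternatives
framework patches the single branch at the top of copy_template_user.S, so the unused
instance costs nothing. None of the names below are real kernel symbols.

    #include <stddef.h>

    /* Illustrative declarations only. */
    int cpu_has_uao(void);                                  /* ARM64_HAS_UAO capability */
    size_t copy_uao(void *to, const void *from, size_t n);  /* template built from LDTR/STTR */
    size_t copy_nuao(void *to, const void *from, size_t n); /* template built from plain LDR/STR */

    /* Returns bytes not copied, like __arch_copy_from_user() itself. */
    static size_t sketch_copy_from_user(void *to, const void *from, size_t n)
    {
            if (cpu_has_uao())          /* unprivileged accesses, PAN can stay set */
                    return copy_uao(to, from, n);

            return copy_nuao(to, from, n);      /* ordinary loads/stores */
    }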
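The imported copy template dispatches on size, as its comments describe: 0..16 bytes,
17..96 bytes, and a 64-byte-per-iteration loop above that, and it freely issues
overlapping accesses from both ends of the buffer so that awkward sizes need no
per-byte tail loop. A minimal C sketch of the 8..16 byte case, using memcpy() only as
a stand-in for an unaligned 8-byte load or store:

    #include <string.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Valid for 8 <= n <= 16: the two 8-byte accesses may overlap in the middle. */
    static void copy_8_to_16(unsigned char *dst, const unsigned char *src, size_t n)
    {
            uint64_t head, tail;

            memcpy(&head, src, 8);
            memcpy(&tail, src + n - 8, 8);
            memcpy(dst, &head, 8);
            memcpy(dst + n - 8, &tail, 8);
    }

This mirrors the ldr1/str1 pairs at L(copy16) above, and the same idea scales up to the
16-byte LDP/STP pairs used by L(copy96) and by the tail of the main loop.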
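The payoff of patch 1 shows up in the new copy_user_fixup.S: because the fault address
arrives in x15, the fixup no longer needs the copy loop to keep a byte-accurate base
register, it only has to classify the fault. In rough C terms (names illustrative,
signedness simplified relative to the cmp/ccmp/csel sequence):

    /*
     * addr is the faulting address delivered in x15; src/srcend and dstend are
     * the bounds the copy routine was working with.  Returns bytes not copied.
     */
    static unsigned long bytes_not_copied(unsigned long addr,
                                          unsigned long src, unsigned long srcend,
                                          unsigned long dstend)
    {
            /* A fault inside [src, srcend) was a failed load; anything else
             * must have been a failed store to the destination buffer. */
            unsigned long end = (addr >= src && addr < srcend) ? srcend : dstend;

            return end - addr;
    }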
X-CRM114-Status: GOOD ( 22.00 ) X-Spam-Score: 1.3 (+) X-Spam-Report: SpamAssassin version 3.4.2 on bombadil.infradead.org summary: Content analysis details: (1.3 points) pts rule name description ---- ---------------------- -------------------------------------------------- 0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record -0.0 SPF_PASS SPF: sender matches SPF record 1.3 RDNS_NONE Delivered to internal network by a host with no rDNS X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Sam Tebbs , sam-tebbs@arm.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org From: Sam Tebbs Import the latest version of Cortex Strings' memcmp function. The upstream source is src/aarch64/memcmp.S as of commit f77e4c932b4f in https://git.linaro.org/toolchain/cortex-strings.git. Signed-off-by: Sam Tebbs [ rm: update attribution, expand commit message ] Signed-off-by: Robin Murphy --- arch/arm64/lib/memcmp.S | 317 ++++++++++++++-------------------------- 1 file changed, 109 insertions(+), 208 deletions(-) diff --git a/arch/arm64/lib/memcmp.S b/arch/arm64/lib/memcmp.S index b297bdaaf549..728dcf5a3673 100644 --- a/arch/arm64/lib/memcmp.S +++ b/arch/arm64/lib/memcmp.S @@ -1,13 +1,12 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* - * Copyright (C) 2013 ARM Ltd. - * Copyright (C) 2013 Linaro. + * Copyright (c) 2013, 2018 Linaro Limited. All rights reserved. + * Copyright (c) 2017 ARM Ltd. All rights reserved. * - * This code is based on glibc cortex strings work originally authored by Linaro - * be found @ + * This code is based on glibc Cortex Strings work originally authored by + * Linaro, found at: * - * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/ - * files/head:/src/aarch64/ + * https://git.linaro.org/toolchain/cortex-strings.git */ #include @@ -25,223 +24,125 @@ * x0 - a compare result, maybe less than, equal to, or greater than ZERO */ +#define L(l) .L ## l + /* Parameters and result. */ -src1 .req x0 -src2 .req x1 -limit .req x2 -result .req x0 +#define src1 x0 +#define src2 x1 +#define limit x2 +#define result w0 /* Internal variables. */ -data1 .req x3 -data1w .req w3 -data2 .req x4 -data2w .req w4 -has_nul .req x5 -diff .req x6 -endloop .req x7 -tmp1 .req x8 -tmp2 .req x9 -tmp3 .req x10 -pos .req x11 -limit_wd .req x12 -mask .req x13 +#define data1 x3 +#define data1w w3 +#define data1h x4 +#define data2 x5 +#define data2w w5 +#define data2h x6 +#define tmp1 x7 +#define tmp2 x8 WEAK(memcmp) - cbz limit, .Lret0 - eor tmp1, src1, src2 - tst tmp1, #7 - b.ne .Lmisaligned8 - ands tmp1, src1, #7 - b.ne .Lmutual_align - sub limit_wd, limit, #1 /* limit != 0, so no underflow. */ - lsr limit_wd, limit_wd, #3 /* Convert to Dwords. */ - /* - * The input source addresses are at alignment boundary. - * Directly compare eight bytes each time. - */ -.Lloop_aligned: - ldr data1, [src1], #8 - ldr data2, [src2], #8 -.Lstart_realigned: - subs limit_wd, limit_wd, #1 - eor diff, data1, data2 /* Non-zero if differences found. */ - csinv endloop, diff, xzr, cs /* Last Dword or differences. */ - cbz endloop, .Lloop_aligned + subs limit, limit, 8 + b.lo L(less8) - /* Not reached the limit, must have found a diff. 
*/ - tbz limit_wd, #63, .Lnot_limit + ldr data1, [src1], 8 + ldr data2, [src2], 8 + cmp data1, data2 + b.ne L(return) - /* Limit % 8 == 0 => the diff is in the last 8 bytes. */ - ands limit, limit, #7 - b.eq .Lnot_limit - /* - * The remained bytes less than 8. It is needed to extract valid data - * from last eight bytes of the intended memory range. - */ - lsl limit, limit, #3 /* bytes-> bits. */ - mov mask, #~0 -CPU_BE( lsr mask, mask, limit ) -CPU_LE( lsl mask, mask, limit ) - bic data1, data1, mask - bic data2, data2, mask + subs limit, limit, 8 + b.gt L(more16) - orr diff, diff, mask - b .Lnot_limit + ldr data1, [src1, limit] + ldr data2, [src2, limit] + b L(return) -.Lmutual_align: - /* - * Sources are mutually aligned, but are not currently at an - * alignment boundary. Round down the addresses and then mask off - * the bytes that precede the start point. - */ - bic src1, src1, #7 - bic src2, src2, #7 - ldr data1, [src1], #8 - ldr data2, [src2], #8 - /* - * We can not add limit with alignment offset(tmp1) here. Since the - * addition probably make the limit overflown. - */ - sub limit_wd, limit, #1/*limit != 0, so no underflow.*/ - and tmp3, limit_wd, #7 - lsr limit_wd, limit_wd, #3 - add tmp3, tmp3, tmp1 - add limit_wd, limit_wd, tmp3, lsr #3 - add limit, limit, tmp1/* Adjust the limit for the extra. */ +L(more16): + ldr data1, [src1], 8 + ldr data2, [src2], 8 + cmp data1, data2 + bne L(return) - lsl tmp1, tmp1, #3/* Bytes beyond alignment -> bits.*/ - neg tmp1, tmp1/* Bits to alignment -64. */ - mov tmp2, #~0 - /*mask off the non-intended bytes before the start address.*/ -CPU_BE( lsl tmp2, tmp2, tmp1 )/*Big-endian.Early bytes are at MSB*/ - /* Little-endian. Early bytes are at LSB. */ -CPU_LE( lsr tmp2, tmp2, tmp1 ) + /* Jump directly to comparing the last 16 bytes for 32 byte (or less) + strings. */ + subs limit, limit, 16 + b.ls L(last_bytes) - orr data1, data1, tmp2 - orr data2, data2, tmp2 - b .Lstart_realigned + /* We overlap loads between 0-32 bytes at either side of SRC1 when we + try to align, so limit it only to strings larger than 128 bytes. */ + cmp limit, 96 + b.ls L(loop16) - /*src1 and src2 have different alignment offset.*/ -.Lmisaligned8: - cmp limit, #8 - b.lo .Ltiny8proc /*limit < 8: compare byte by byte*/ + /* Align src1 and adjust src2 with bytes not yet done. */ + and tmp1, src1, 15 + add limit, limit, tmp1 + sub src1, src1, tmp1 + sub src2, src2, tmp1 - and tmp1, src1, #7 - neg tmp1, tmp1 - add tmp1, tmp1, #8/*valid length in the first 8 bytes of src1*/ - and tmp2, src2, #7 - neg tmp2, tmp2 - add tmp2, tmp2, #8/*valid length in the first 8 bytes of src2*/ - subs tmp3, tmp1, tmp2 - csel pos, tmp1, tmp2, hi /*Choose the maximum.*/ + /* Loop performing 16 bytes per iteration using aligned src1. + Limit is pre-decremented by 16 and must be larger than zero. + Exit if <= 16 bytes left to do or if the data is not equal. */ + .p2align 4 +L(loop16): + ldp data1, data1h, [src1], 16 + ldp data2, data2h, [src2], 16 + subs limit, limit, 16 + ccmp data1, data2, 0, hi + ccmp data1h, data2h, 0, eq + b.eq L(loop16) - sub limit, limit, pos - /*compare the proceeding bytes in the first 8 byte segment.*/ -.Ltinycmp: - ldrb data1w, [src1], #1 - ldrb data2w, [src2], #1 - subs pos, pos, #1 - ccmp data1w, data2w, #0, ne /* NZCV = 0b0000. */ - b.eq .Ltinycmp - cbnz pos, 1f /*diff occurred before the last byte.*/ + cmp data1, data2 + bne L(return) + mov data1, data1h + mov data2, data2h + cmp data1, data2 + bne L(return) + + /* Compare last 1-16 bytes using unaligned access. 
*/ +L(last_bytes): + add src1, src1, limit + add src2, src2, limit + ldp data1, data1h, [src1] + ldp data2, data2h, [src2] + cmp data1, data2 + bne L(return) + mov data1, data1h + mov data2, data2h + cmp data1, data2 + + /* Compare data bytes and set return value to 0, -1 or 1. */ +L(return): +#ifndef __AARCH64EB__ + rev data1, data1 + rev data2, data2 +#endif + cmp data1, data2 +L(ret_eq): + cset result, ne + cneg result, result, lo + ret + + .p2align 4 + /* Compare up to 8 bytes. Limit is [-8..-1]. */ +L(less8): + adds limit, limit, 4 + b.lo L(less4) + ldr data1w, [src1], 4 + ldr data2w, [src2], 4 cmp data1w, data2w - b.eq .Lstart_align -1: - sub result, data1, data2 - ret - -.Lstart_align: - lsr limit_wd, limit, #3 - cbz limit_wd, .Lremain8 - - ands xzr, src1, #7 - b.eq .Lrecal_offset - /*process more leading bytes to make src1 aligned...*/ - add src1, src1, tmp3 /*backwards src1 to alignment boundary*/ - add src2, src2, tmp3 - sub limit, limit, tmp3 - lsr limit_wd, limit, #3 - cbz limit_wd, .Lremain8 - /*load 8 bytes from aligned SRC1..*/ - ldr data1, [src1], #8 - ldr data2, [src2], #8 - - subs limit_wd, limit_wd, #1 - eor diff, data1, data2 /*Non-zero if differences found.*/ - csinv endloop, diff, xzr, ne - cbnz endloop, .Lunequal_proc - /*How far is the current SRC2 from the alignment boundary...*/ - and tmp3, tmp3, #7 - -.Lrecal_offset:/*src1 is aligned now..*/ - neg pos, tmp3 -.Lloopcmp_proc: - /* - * Divide the eight bytes into two parts. First,backwards the src2 - * to an alignment boundary,load eight bytes and compare from - * the SRC2 alignment boundary. If all 8 bytes are equal,then start - * the second part's comparison. Otherwise finish the comparison. - * This special handle can garantee all the accesses are in the - * thread/task space in avoid to overrange access. - */ - ldr data1, [src1,pos] - ldr data2, [src2,pos] - eor diff, data1, data2 /* Non-zero if differences found. */ - cbnz diff, .Lnot_limit - - /*The second part process*/ - ldr data1, [src1], #8 - ldr data2, [src2], #8 - eor diff, data1, data2 /* Non-zero if differences found. */ - subs limit_wd, limit_wd, #1 - csinv endloop, diff, xzr, ne/*if limit_wd is 0,will finish the cmp*/ - cbz endloop, .Lloopcmp_proc -.Lunequal_proc: - cbz diff, .Lremain8 - -/* There is difference occurred in the latest comparison. */ -.Lnot_limit: -/* -* For little endian,reverse the low significant equal bits into MSB,then -* following CLZ can find how many equal bits exist. -*/ -CPU_LE( rev diff, diff ) -CPU_LE( rev data1, data1 ) -CPU_LE( rev data2, data2 ) - - /* - * The MS-non-zero bit of DIFF marks either the first bit - * that is different, or the end of the significant data. - * Shifting left now will bring the critical information into the - * top bits. - */ - clz pos, diff - lsl data1, data1, pos - lsl data2, data2, pos - /* - * We need to zero-extend (char is unsigned) the value and then - * perform a signed subtraction. - */ - lsr data1, data1, #56 - sub result, data1, data2, lsr #56 - ret - -.Lremain8: - /* Limit % 8 == 0 =>. all data are equal.*/ - ands limit, limit, #7 - b.eq .Lret0 - -.Ltiny8proc: - ldrb data1w, [src1], #1 - ldrb data2w, [src2], #1 - subs limit, limit, #1 - - ccmp data1w, data2w, #0, ne /* NZCV = 0b0000. */ - b.eq .Ltiny8proc - sub result, data1, data2 - ret -.Lret0: - mov result, #0 + b.ne L(return) + sub limit, limit, 4 +L(less4): + adds limit, limit, 4 + beq L(ret_eq) +L(byte_loop): + ldrb data1w, [src1], 1 + ldrb data2w, [src2], 1 + subs limit, limit, 1 + ccmp data1w, data2w, 0, ne /* NZCV = 0b0000. 
*/ + b.eq L(byte_loop) + sub result, data1w, data2w ret ENDPIPROC(memcmp) EXPORT_SYMBOL_NOKASAN(memcmp) From patchwork Fri Oct 18 18:16:38 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11199361 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D579B112B for ; Fri, 18 Oct 2019 18:17:40 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 964D920820 for ; Fri, 18 Oct 2019 18:17:40 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="USJT7CAU" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 964D920820 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=arm.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=NSe9tm/udljQDxkOEjIk0NB52ImjbenzNyIfTuVe/Bg=; b=USJT7CAUaHkD6y Dsvt3M+JwU9qPNCXxIZgp9AyaZmuCK7biWTYH43tRaHPES082YoBOXWbH60njLbUkRHTspKfTrkCf BpNAkIX6Fa5/ZPc2hcQ1v4u3EIT702oVyW3ktZlJ0CQ3FMD2nl4E/Yptf8fUGttpxopEI671XklXk msR1yQl0Vim/QI7Eji/XUCRuruo3zBy5bQROoC2a0wJkuPMuZyfdCOEQshuqP69+TAY5kQBH6DMrc YqN3MOE9AlQscoKJqLtGfsXkJLxd+c7vcOvtLzBMTXDRJF5P41HH9msUQUClmxHouHhMgouU9i3ox GM3icFKyL60bRb2TAdlA==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1iLWob-0007tV-OA; Fri, 18 Oct 2019 18:17:29 +0000 Received: from [217.140.110.172] (helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1iLWo5-0007SY-Jg for linux-arm-kernel@lists.infradead.org; Fri, 18 Oct 2019 18:17:01 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 07ED815DB; Fri, 18 Oct 2019 11:16:52 -0700 (PDT) Received: from e110467-lin.cambridge.arm.com (e110467-lin.cambridge.arm.com [10.1.197.57]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 3B9623F718; Fri, 18 Oct 2019 11:16:51 -0700 (PDT) From: Robin Murphy To: will@kernel.org, catalin.marinas@arm.com Subject: [PATCH v2 4/8] arm64: Import latest version of Cortex Strings' memmove Date: Fri, 18 Oct 2019 19:16:38 +0100 Message-Id: X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20191018_111657_749564_4D04DB58 X-CRM114-Status: GOOD ( 16.37 ) X-Spam-Score: 1.3 (+) X-Spam-Report: SpamAssassin version 3.4.2 on bombadil.infradead.org summary: Content analysis details: (1.3 points) pts rule name description ---- ---------------------- -------------------------------------------------- 
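The new memcmp above works 8 or 16 bytes at a time and only drops to a byte loop for a
sub-4-byte tail. The part worth spelling out is how it turns the first differing 64-bit
chunk into a result without locating the differing byte: after a little-endian load,
byte-reversing both words makes the first differing byte the most significant one, so a
single unsigned comparison gives the ordering. A hedged C equivalent of that final step
(the function name is illustrative; __builtin_bswap64 is the GCC/Clang byte-swap builtin
standing in for the REV instruction):

    #include <stdint.h>

    static int memcmp_word_result(uint64_t data1, uint64_t data2)
    {
            data1 = __builtin_bswap64(data1);
            data2 = __builtin_bswap64(data2);

            if (data1 == data2)
                    return 0;
            return data1 > data2 ? 1 : -1;      /* matches the cset/cneg pair */
    }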
0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record -0.0 SPF_PASS SPF: sender matches SPF record 1.3 RDNS_NONE Delivered to internal network by a host with no rDNS X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Sam Tebbs , sam-tebbs@arm.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org From: Sam Tebbs Import the latest version of Cortex Strings' memmove function. The upstream source is src/aarch64/memmove.S as of commit 99b01ddb8e41 in https://git.linaro.org/toolchain/cortex-strings.git. Signed-off-by: Sam Tebbs [ rm: update attribution, expand commit message ] Signed-off-by: Robin Murphy --- v2: remember to convert memcpy() call to __memcpy() arch/arm64/lib/memmove.S | 236 +++++++++++++-------------------------- 1 file changed, 80 insertions(+), 156 deletions(-) diff --git a/arch/arm64/lib/memmove.S b/arch/arm64/lib/memmove.S index 784775136480..40923f14b34e 100644 --- a/arch/arm64/lib/memmove.S +++ b/arch/arm64/lib/memmove.S @@ -1,13 +1,12 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* - * Copyright (C) 2013 ARM Ltd. - * Copyright (C) 2013 Linaro. + * Copyright (c) 2013 Linaro Limited. All rights reserved. + * Copyright (c) 2015 ARM Ltd. All rights reserved. * - * This code is based on glibc cortex strings work originally authored by Linaro - * be found @ + * This code is based on glibc Cortex Strings work originally authored by + * Linaro, found at: * - * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/ - * files/head:/src/aarch64/ + * https://git.linaro.org/toolchain/cortex-strings.git */ #include @@ -25,165 +24,90 @@ * Returns: * x0 - dest */ -dstin .req x0 -src .req x1 -count .req x2 -tmp1 .req x3 -tmp1w .req w3 -tmp2 .req x4 -tmp2w .req w4 -tmp3 .req x5 -tmp3w .req w5 -dst .req x6 +/* Parameters and result. */ +#define dstin x0 +#define src x1 +#define count x2 +#define srcend x3 +#define dstend x4 +#define tmp1 x5 +#define A_l x6 +#define A_h x7 +#define B_l x8 +#define B_h x9 +#define C_l x10 +#define C_h x11 +#define D_l x12 +#define D_h x13 +#define E_l count +#define E_h tmp1 -A_l .req x7 -A_h .req x8 -B_l .req x9 -B_h .req x10 -C_l .req x11 -C_h .req x12 -D_l .req x13 -D_h .req x14 +/* All memmoves up to 96 bytes are done by memcpy as it supports overlaps. + Larger backwards copies are also handled by memcpy. The only remaining + case is forward large copies. The destination is aligned, and an + unrolled loop processes 64 bytes per iteration. +*/ - .weak memmove + .weak memmove ENTRY(__memmove) ENTRY(memmove) - cmp dstin, src - b.lo __memcpy - add tmp1, src, count - cmp dstin, tmp1 - b.hs __memcpy /* No overlap. */ + sub tmp1, dstin, src + cmp count, 96 + ccmp tmp1, count, 2, hi + b.hs __memcpy - add dst, dstin, count - add src, src, count - cmp count, #16 - b.lo .Ltail15 /*probably non-alignment accesses.*/ + cbz tmp1, 3f + add dstend, dstin, count + add srcend, src, count - ands tmp2, src, #15 /* Bytes to reach alignment. */ - b.eq .LSrcAligned - sub count, count, tmp2 - /* - * process the aligned offset length to make the src aligned firstly. - * those extra instructions' cost is acceptable. It also make the - * coming accesses are based on aligned address. - */ - tbz tmp2, #0, 1f - ldrb tmp1w, [src, #-1]! - strb tmp1w, [dst, #-1]! 
+ /* Align dstend to 16 byte alignment so that we don't cross cache line + boundaries on both loads and stores. There are at least 96 bytes + to copy, so copy 16 bytes unaligned and then align. The loop + copies 64 bytes per iteration and prefetches one iteration ahead. */ + + and tmp1, dstend, 15 + ldp D_l, D_h, [srcend, -16] + sub srcend, srcend, tmp1 + sub count, count, tmp1 + ldp A_l, A_h, [srcend, -16] + stp D_l, D_h, [dstend, -16] + ldp B_l, B_h, [srcend, -32] + ldp C_l, C_h, [srcend, -48] + ldp D_l, D_h, [srcend, -64]! + sub dstend, dstend, tmp1 + subs count, count, 128 + b.ls 2f + nop 1: - tbz tmp2, #1, 2f - ldrh tmp1w, [src, #-2]! - strh tmp1w, [dst, #-2]! + stp A_l, A_h, [dstend, -16] + ldp A_l, A_h, [srcend, -16] + stp B_l, B_h, [dstend, -32] + ldp B_l, B_h, [srcend, -32] + stp C_l, C_h, [dstend, -48] + ldp C_l, C_h, [srcend, -48] + stp D_l, D_h, [dstend, -64]! + ldp D_l, D_h, [srcend, -64]! + subs count, count, 64 + b.hi 1b + + /* Write the last full set of 64 bytes. The remainder is at most 64 + bytes, so it is safe to always copy 64 bytes from the start even if + there is just 1 byte left. */ 2: - tbz tmp2, #2, 3f - ldr tmp1w, [src, #-4]! - str tmp1w, [dst, #-4]! -3: - tbz tmp2, #3, .LSrcAligned - ldr tmp1, [src, #-8]! - str tmp1, [dst, #-8]! + ldp E_l, E_h, [src, 48] + stp A_l, A_h, [dstend, -16] + ldp A_l, A_h, [src, 32] + stp B_l, B_h, [dstend, -32] + ldp B_l, B_h, [src, 16] + stp C_l, C_h, [dstend, -48] + ldp C_l, C_h, [src] + stp D_l, D_h, [dstend, -64] + stp E_l, E_h, [dstin, 48] + stp A_l, A_h, [dstin, 32] + stp B_l, B_h, [dstin, 16] + stp C_l, C_h, [dstin] +3: ret -.LSrcAligned: - cmp count, #64 - b.ge .Lcpy_over64 - - /* - * Deal with small copies quickly by dropping straight into the - * exit block. - */ -.Ltail63: - /* - * Copy up to 48 bytes of data. At this point we only need the - * bottom 6 bits of count to be accurate. - */ - ands tmp1, count, #0x30 - b.eq .Ltail15 - cmp tmp1w, #0x20 - b.eq 1f - b.lt 2f - ldp A_l, A_h, [src, #-16]! - stp A_l, A_h, [dst, #-16]! -1: - ldp A_l, A_h, [src, #-16]! - stp A_l, A_h, [dst, #-16]! -2: - ldp A_l, A_h, [src, #-16]! - stp A_l, A_h, [dst, #-16]! - -.Ltail15: - tbz count, #3, 1f - ldr tmp1, [src, #-8]! - str tmp1, [dst, #-8]! -1: - tbz count, #2, 2f - ldr tmp1w, [src, #-4]! - str tmp1w, [dst, #-4]! -2: - tbz count, #1, 3f - ldrh tmp1w, [src, #-2]! - strh tmp1w, [dst, #-2]! -3: - tbz count, #0, .Lexitfunc - ldrb tmp1w, [src, #-1] - strb tmp1w, [dst, #-1] - -.Lexitfunc: - ret - -.Lcpy_over64: - subs count, count, #128 - b.ge .Lcpy_body_large - /* - * Less than 128 bytes to copy, so handle 64 bytes here and then jump - * to the tail. - */ - ldp A_l, A_h, [src, #-16] - stp A_l, A_h, [dst, #-16] - ldp B_l, B_h, [src, #-32] - ldp C_l, C_h, [src, #-48] - stp B_l, B_h, [dst, #-32] - stp C_l, C_h, [dst, #-48] - ldp D_l, D_h, [src, #-64]! - stp D_l, D_h, [dst, #-64]! - - tst count, #0x3f - b.ne .Ltail63 - ret - - /* - * Critical loop. Start at a new cache line boundary. Assuming - * 64 bytes per line this ensures the entire loop is in one line. - */ - .p2align L1_CACHE_SHIFT -.Lcpy_body_large: - /* pre-load 64 bytes data. */ - ldp A_l, A_h, [src, #-16] - ldp B_l, B_h, [src, #-32] - ldp C_l, C_h, [src, #-48] - ldp D_l, D_h, [src, #-64]! -1: - /* - * interlace the load of next 64 bytes data block with store of the last - * loaded 64 bytes data. 
- */ - stp A_l, A_h, [dst, #-16] - ldp A_l, A_h, [src, #-16] - stp B_l, B_h, [dst, #-32] - ldp B_l, B_h, [src, #-32] - stp C_l, C_h, [dst, #-48] - ldp C_l, C_h, [src, #-48] - stp D_l, D_h, [dst, #-64]! - ldp D_l, D_h, [src, #-64]! - subs count, count, #64 - b.ge 1b - stp A_l, A_h, [dst, #-16] - stp B_l, B_h, [dst, #-32] - stp C_l, C_h, [dst, #-48] - stp D_l, D_h, [dst, #-64]! - - tst count, #0x3f - b.ne .Ltail63 - ret ENDPIPROC(memmove) EXPORT_SYMBOL(memmove) ENDPROC(__memmove) From patchwork Fri Oct 18 18:16:39 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11199373 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4E3A31575 for ; Fri, 18 Oct 2019 18:18:46 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 23FE220820 for ; Fri, 18 Oct 2019 18:18:46 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="tqcwXcOd" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 23FE220820 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=arm.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=jeG2nDxKkFlh4RzIKhOxdPlxo8WdyXrtFVd+A6JN6q0=; b=tqcwXcOdquFTFm 9/C21DY/ps9MRwKsCppqW/EDnxaqMcbIhzoQ5nu4nGA5hZdD06SkNqFER3mVkHs1cZmr7RkiCpnb8 /LPqw837Jk6YlJbSYVtOOG8T7Ce+m5mi+XXrSG+6LIt4rJ0uUcX0FZ4iT9yMaozkEjvS9S8UTK+To YvUyjyUdvatIxjJuifib61StucbAoCXGdp3Eqz7yN0LhI/FMx2HBC3bEd8/jlsZiZOYsILGeDRKH8 dh3ODE//GVLs3wPcE+oy0wdykNHUnpyeXLmSAC8dR/AXjRXao10EODXkcrB2nO2e0hnBYe8QQBvTV paCUptQDbmpQepTvs+Og==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1iLWpo-0000do-F3; Fri, 18 Oct 2019 18:18:44 +0000 Received: from [217.140.110.172] (helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1iLWoH-0007UM-KQ for linux-arm-kernel@lists.infradead.org; Fri, 18 Oct 2019 18:17:13 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0995415EC; Fri, 18 Oct 2019 11:16:53 -0700 (PDT) Received: from e110467-lin.cambridge.arm.com (e110467-lin.cambridge.arm.com [10.1.197.57]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 3D35F3F718; Fri, 18 Oct 2019 11:16:52 -0700 (PDT) From: Robin Murphy To: will@kernel.org, catalin.marinas@arm.com Subject: [PATCH v2 5/8] arm64: Import latest version of Cortex Strings' strcmp Date: Fri, 18 Oct 2019 19:16:39 +0100 Message-Id: X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 
20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20191018_111709_826354_1A8B9751 X-CRM114-Status: GOOD ( 20.25 ) X-Spam-Score: 1.3 (+) X-Spam-Report: SpamAssassin version 3.4.2 on bombadil.infradead.org summary: Content analysis details: (1.3 points) pts rule name description ---- ---------------------- -------------------------------------------------- 0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record -0.0 SPF_PASS SPF: sender matches SPF record 1.3 RDNS_NONE Delivered to internal network by a host with no rDNS X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Sam Tebbs , sam-tebbs@arm.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org From: Sam Tebbs Import the latest version of Cortex Strings' strcmp function. The upstream source is src/aarch64/strcmp.S as of commit 90b61261ceb4 in https://git.linaro.org/toolchain/cortex-strings.git. Signed-off-by: Sam Tebbs [ rm: update attribution, expand commit message ] Signed-off-by: Robin Murphy --- v2: fix shift argument typo arch/arm64/lib/strcmp.S | 278 +++++++++++++++++----------------------- 1 file changed, 116 insertions(+), 162 deletions(-) diff --git a/arch/arm64/lib/strcmp.S b/arch/arm64/lib/strcmp.S index e9aefbe0b740..2f0b000d044c 100644 --- a/arch/arm64/lib/strcmp.S +++ b/arch/arm64/lib/strcmp.S @@ -1,13 +1,11 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* - * Copyright (C) 2013 ARM Ltd. - * Copyright (C) 2013 Linaro. + * Copyright (c) 2012,2018 Linaro Limited. All rights reserved. * - * This code is based on glibc cortex strings work originally authored by Linaro - * be found @ + * This code is based on glibc Cortex Strings work originally authored by + * Linaro, found at: * - * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/ - * files/head:/src/aarch64/ + * https://git.linaro.org/toolchain/cortex-strings.git */ #include @@ -25,60 +23,106 @@ * or be greater than s2. */ +#define L(label) .L ## label + #define REP8_01 0x0101010101010101 #define REP8_7f 0x7f7f7f7f7f7f7f7f #define REP8_80 0x8080808080808080 /* Parameters and result. */ -src1 .req x0 -src2 .req x1 -result .req x0 +#define src1 x0 +#define src2 x1 +#define result x0 /* Internal variables. */ -data1 .req x2 -data1w .req w2 -data2 .req x3 -data2w .req w3 -has_nul .req x4 -diff .req x5 -syndrome .req x6 -tmp1 .req x7 -tmp2 .req x8 -tmp3 .req x9 -zeroones .req x10 -pos .req x11 +#define data1 x2 +#define data1w w2 +#define data2 x3 +#define data2w w3 +#define has_nul x4 +#define diff x5 +#define syndrome x6 +#define tmp1 x7 +#define tmp2 x8 +#define tmp3 x9 +#define zeroones x10 +#define pos x11 + /* Start of performance-critical section -- one 64B cache line. */ WEAK(strcmp) eor tmp1, src1, src2 mov zeroones, #REP8_01 tst tmp1, #7 - b.ne .Lmisaligned8 + b.ne L(misaligned8) ands tmp1, src1, #7 - b.ne .Lmutual_align - - /* - * NUL detection works on the principle that (X - 1) & (~X) & 0x80 - * (=> (X - 1) & ~(X | 0x7f)) is non-zero iff a byte is zero, and - * can be done in parallel across the entire word. - */ -.Lloop_aligned: + b.ne L(mutual_align) + /* NUL detection works on the principle that (X - 1) & (~X) & 0x80 + (=> (X - 1) & ~(X | 0x7f)) is non-zero iff a byte is zero, and + can be done in parallel across the entire word. 
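The NUL-detection identity referenced throughout this series is easy to sanity-check in plain C; the following stand-alone snippet is purely illustrative (names and test values are invented here, not taken from the patch):

#include <stdint.h>
#include <stdio.h>

#define REP8_01 0x0101010101010101ULL
#define REP8_7f 0x7f7f7f7f7f7f7f7fULL

/* Non-zero iff at least one byte of x is 0x00, i.e. (x - 1) & ~(x | 0x7f)
 * evaluated across all eight bytes of the word at once. */
static uint64_t has_nul_byte(uint64_t x)
{
        return (x - REP8_01) & ~(x | REP8_7f);
}

int main(void)
{
        uint64_t no_nul   = 0x6867666564636261ULL;  /* "abcdefgh", little-endian load */
        uint64_t with_nul = 0x6867666500636261ULL;  /* "abc\0efgh" */

        printf("%d %d\n", has_nul_byte(no_nul) != 0, has_nul_byte(with_nul) != 0);  /* 0 1 */
        return 0;
}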
*/ +L(loop_aligned): ldr data1, [src1], #8 ldr data2, [src2], #8 -.Lstart_realigned: +L(start_realigned): sub tmp1, data1, zeroones orr tmp2, data1, #REP8_7f eor diff, data1, data2 /* Non-zero if differences found. */ bic has_nul, tmp1, tmp2 /* Non-zero if NUL terminator. */ orr syndrome, diff, has_nul - cbz syndrome, .Lloop_aligned - b .Lcal_cmpresult + cbz syndrome, L(loop_aligned) + /* End of performance-critical section -- one 64B cache line. */ -.Lmutual_align: - /* - * Sources are mutually aligned, but are not currently at an - * alignment boundary. Round down the addresses and then mask off - * the bytes that preceed the start point. - */ +L(end): +CPU_LE(rev syndrome, syndrome) +CPU_LE(rev data1, data1) + /* The MS-non-zero bit of the syndrome marks either the first bit + that is different, or the top bit of the first zero byte. + Shifting left now will bring the critical information into the + top bits. */ +CPU_LE(clz pos, syndrome) +CPU_LE(rev data2, data2) +CPU_LE(lsl data1, data1, pos) +CPU_LE(lsl data2, data2, pos) + /* But we need to zero-extend (char is unsigned) the value and then + perform a signed 32-bit subtraction. */ +CPU_LE(lsr data1, data1, #56) +CPU_LE(sub result, data1, data2, lsr #56) +CPU_LE(ret) + /* For big-endian we cannot use the trick with the syndrome value + as carry-propagation can corrupt the upper bits if the trailing + bytes in the string contain 0x01. */ + /* However, if there is no NUL byte in the dword, we can generate + the result directly. We can't just subtract the bytes as the + MSB might be significant. */ +CPU_BE(cbnz has_nul, 1f) +CPU_BE(cmp data1, data2) +CPU_BE(cset result, ne) +CPU_BE(cneg result, result, lo) +CPU_BE(ret) +1: + /* Re-compute the NUL-byte detection, using a byte-reversed value. */ +CPU_BE(rev tmp3, data1) +CPU_BE(sub tmp1, tmp3, zeroones) +CPU_BE(orr tmp2, tmp3, #REP8_7f) +CPU_BE(bic has_nul, tmp1, tmp2) +CPU_BE(rev has_nul, has_nul) +CPU_BE(orr syndrome, diff, has_nul) +CPU_BE(clz pos, syndrome) + /* The MS-non-zero bit of the syndrome marks either the first bit + that is different, or the top bit of the first zero byte. + Shifting left now will bring the critical information into the + top bits. */ +CPU_BE(lsl data1, data1, pos) +CPU_BE(lsl data2, data2, pos) + /* But we need to zero-extend (char is unsigned) the value and then + perform a signed 32-bit subtraction. */ +CPU_BE(lsr data1, data1, #56) +CPU_BE(sub result, data1, data2, lsr #56) +CPU_BE(ret) + +L(mutual_align): + /* Sources are mutually aligned, but are not currently at an + alignment boundary. Round down the addresses and then mask off + the bytes that preceed the start point. */ bic src1, src1, #7 bic src2, src2, #7 lsl tmp1, tmp1, #3 /* Bytes beyond alignment -> bits. */ @@ -87,137 +131,47 @@ WEAK(strcmp) ldr data2, [src2], #8 mov tmp2, #~0 /* Big-endian. Early bytes are at MSB. */ -CPU_BE( lsl tmp2, tmp2, tmp1 ) /* Shift (tmp1 & 63). */ +CPU_BE(lsl tmp2, tmp2, tmp1) /* Shift (tmp1 & 63). */ /* Little-endian. Early bytes are at LSB. */ -CPU_LE( lsr tmp2, tmp2, tmp1 ) /* Shift (tmp1 & 63). */ - +CPU_LE(lsr tmp2, tmp2, tmp1) /* Shift (tmp1 & 63). */ orr data1, data1, tmp2 orr data2, data2, tmp2 - b .Lstart_realigned + b L(start_realigned) -.Lmisaligned8: - /* - * Get the align offset length to compare per byte first. - * After this process, one string's address will be aligned. 
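The little-endian L(end) sequence above (rev, clz, lsl, lsr on the syndrome) is compact but dense; the same result computation can be written in C roughly as below. This sketch covers the little-endian case only and uses invented names; it is not code from the patch.

#include <stdint.h>

#define REP8_01 0x0101010101010101ULL
#define REP8_7f 0x7f7f7f7f7f7f7f7fULL

/* Comparison result for two little-endian loaded words which are known to
 * differ or to contain a NUL, so the syndrome below is non-zero. */
static int cmp_words_le(uint64_t data1, uint64_t data2)
{
        uint64_t diff = data1 ^ data2;
        uint64_t has_nul = (data1 - REP8_01) & ~(data1 | REP8_7f);
        uint64_t syndrome = diff | has_nul;
        int shift;

        /* Byte-reverse so a leading-zero count walks from the first byte of
         * the string towards the last. */
        syndrome = __builtin_bswap64(syndrome);
        data1 = __builtin_bswap64(data1);
        data2 = __builtin_bswap64(data2);
        shift = __builtin_clzll(syndrome);

        /* Bring the critical bits to the top and compare the resulting top
         * bytes; only the sign of the difference matters, which is all that
         * strcmp() guarantees. */
        return (int)((data1 << shift) >> 56) - (int)((data2 << shift) >> 56);
}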
- */ - and tmp1, src1, #7 - neg tmp1, tmp1 - add tmp1, tmp1, #8 - and tmp2, src2, #7 - neg tmp2, tmp2 - add tmp2, tmp2, #8 - subs tmp3, tmp1, tmp2 - csel pos, tmp1, tmp2, hi /*Choose the maximum. */ -.Ltinycmp: +L(misaligned8): + /* Align SRC1 to 8 bytes and then compare 8 bytes at a time, always + checking to make sure that we don't access beyond page boundary in + SRC2. */ + tst src1, #7 + b.eq L(loop_misaligned) +L(do_misaligned): ldrb data1w, [src1], #1 ldrb data2w, [src2], #1 - subs pos, pos, #1 - ccmp data1w, #1, #0, ne /* NZCV = 0b0000. */ - ccmp data1w, data2w, #0, cs /* NZCV = 0b0000. */ - b.eq .Ltinycmp - cbnz pos, 1f /*find the null or unequal...*/ cmp data1w, #1 - ccmp data1w, data2w, #0, cs - b.eq .Lstart_align /*the last bytes are equal....*/ -1: + ccmp data1w, data2w, #0, cs /* NZCV = 0b0000. */ + b.ne L(done) + tst src1, #7 + b.ne L(do_misaligned) + +L(loop_misaligned): + /* Test if we are within the last dword of the end of a 4K page. If + yes then jump back to the misaligned loop to copy a byte at a time. */ + and tmp1, src2, #0xff8 + eor tmp1, tmp1, #0xff8 + cbz tmp1, L(do_misaligned) + ldr data1, [src1], #8 + ldr data2, [src2], #8 + + sub tmp1, data1, zeroones + orr tmp2, data1, #REP8_7f + eor diff, data1, data2 /* Non-zero if differences found. */ + bic has_nul, tmp1, tmp2 /* Non-zero if NUL terminator. */ + orr syndrome, diff, has_nul + cbz syndrome, L(loop_misaligned) + b L(end) + +L(done): sub result, data1, data2 ret - -.Lstart_align: - ands xzr, src1, #7 - b.eq .Lrecal_offset - /*process more leading bytes to make str1 aligned...*/ - add src1, src1, tmp3 - add src2, src2, tmp3 - /*load 8 bytes from aligned str1 and non-aligned str2..*/ - ldr data1, [src1], #8 - ldr data2, [src2], #8 - - sub tmp1, data1, zeroones - orr tmp2, data1, #REP8_7f - bic has_nul, tmp1, tmp2 - eor diff, data1, data2 /* Non-zero if differences found. */ - orr syndrome, diff, has_nul - cbnz syndrome, .Lcal_cmpresult - /*How far is the current str2 from the alignment boundary...*/ - and tmp3, tmp3, #7 -.Lrecal_offset: - neg pos, tmp3 -.Lloopcmp_proc: - /* - * Divide the eight bytes into two parts. First,backwards the src2 - * to an alignment boundary,load eight bytes from the SRC2 alignment - * boundary,then compare with the relative bytes from SRC1. - * If all 8 bytes are equal,then start the second part's comparison. - * Otherwise finish the comparison. - * This special handle can garantee all the accesses are in the - * thread/task space in avoid to overrange access. - */ - ldr data1, [src1,pos] - ldr data2, [src2,pos] - sub tmp1, data1, zeroones - orr tmp2, data1, #REP8_7f - bic has_nul, tmp1, tmp2 - eor diff, data1, data2 /* Non-zero if differences found. */ - orr syndrome, diff, has_nul - cbnz syndrome, .Lcal_cmpresult - - /*The second part process*/ - ldr data1, [src1], #8 - ldr data2, [src2], #8 - sub tmp1, data1, zeroones - orr tmp2, data1, #REP8_7f - bic has_nul, tmp1, tmp2 - eor diff, data1, data2 /* Non-zero if differences found. */ - orr syndrome, diff, has_nul - cbz syndrome, .Lloopcmp_proc - -.Lcal_cmpresult: - /* - * reversed the byte-order as big-endian,then CLZ can find the most - * significant zero bits. - */ -CPU_LE( rev syndrome, syndrome ) -CPU_LE( rev data1, data1 ) -CPU_LE( rev data2, data2 ) - - /* - * For big-endian we cannot use the trick with the syndrome value - * as carry-propagation can corrupt the upper bits if the trailing - * bytes in the string contain 0x01. - * However, if there is no NUL byte in the dword, we can generate - * the result directly. 
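One detail worth spelling out from L(loop_misaligned) above: the and/eor on #0xff8 is a conservative "last dword of a 4KiB page" test for src2. In C (illustrative only, with an invented name):

#include <stdint.h>

/* True when src2 points into the last 8 bytes of a 4KiB page, i.e. when an
 * unaligned 8-byte load from src2 might run into the following page.
 * Mirrors: and tmp1, src2, #0xff8 ; eor tmp1, tmp1, #0xff8 ; cbz tmp1, ... */
static int in_last_dword_of_page(uintptr_t src2)
{
        return ((src2 & 0xff8) ^ 0xff8) == 0;
}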
We ca not just subtract the bytes as the - * MSB might be significant. - */ -CPU_BE( cbnz has_nul, 1f ) -CPU_BE( cmp data1, data2 ) -CPU_BE( cset result, ne ) -CPU_BE( cneg result, result, lo ) -CPU_BE( ret ) -CPU_BE( 1: ) - /*Re-compute the NUL-byte detection, using a byte-reversed value. */ -CPU_BE( rev tmp3, data1 ) -CPU_BE( sub tmp1, tmp3, zeroones ) -CPU_BE( orr tmp2, tmp3, #REP8_7f ) -CPU_BE( bic has_nul, tmp1, tmp2 ) -CPU_BE( rev has_nul, has_nul ) -CPU_BE( orr syndrome, diff, has_nul ) - - clz pos, syndrome - /* - * The MS-non-zero bit of the syndrome marks either the first bit - * that is different, or the top bit of the first zero byte. - * Shifting left now will bring the critical information into the - * top bits. - */ - lsl data1, data1, pos - lsl data2, data2, pos - /* - * But we need to zero-extend (char is unsigned) the value and then - * perform a signed 32-bit subtraction. - */ - lsr data1, data1, #56 - sub result, data1, data2, lsr #56 - ret ENDPIPROC(strcmp) EXPORT_SYMBOL_NOKASAN(strcmp) From patchwork Fri Oct 18 18:16:40 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11199377 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EBC001575 for ; Fri, 18 Oct 2019 18:19:24 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id C5E9521835 for ; Fri, 18 Oct 2019 18:19:24 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="gtWUD4Lx" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C5E9521835 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=arm.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=2K/j9NxC645B9qRyo04PpNBMl/SKSVSnqtpnZt6ElRc=; b=gtWUD4Lxa0/fFx fTPYxBeztBqhwrl8FO8ZyvHSLx2S6jSnQx/oYxxRKMrT8FMUiAmQX+TZ2kFK/IikMJE2kmYHFf5xX e18WoW/gmzz79lQTBx4DHsZqDze1ZOWzC1uYz4eHRbnL8YAAEnEuplZMTC3TqVWklZ5QO6NKny8WJ 0iBXUmMyL0/NTfB5/P1opkZRRbHtPurekqhiAEBivrT9yheLLsALdXT98822qdUJaJEWXOUrLo6M/ uwiw4mJCyninrDERnsiWkhUAdGqbz15/WQdSw8Y4xf8MVkBTidA3lxOoVM4stw/sCbQY6amTmdE4N yR/VfPxmi05N5rQEmlVQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1iLWqL-000166-A5; Fri, 18 Oct 2019 18:19:17 +0000 Received: from [217.140.110.172] (helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1iLWoH-0007UL-Jd for linux-arm-kernel@lists.infradead.org; Fri, 18 Oct 2019 18:17:17 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0ACFE161B; 
Fri, 18 Oct 2019 11:16:54 -0700 (PDT) Received: from e110467-lin.cambridge.arm.com (e110467-lin.cambridge.arm.com [10.1.197.57]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 3E8213F718; Fri, 18 Oct 2019 11:16:53 -0700 (PDT) From: Robin Murphy To: will@kernel.org, catalin.marinas@arm.com Subject: [PATCH v2 6/8] arm64: Import latest version of Cortex Strings' strlen Date: Fri, 18 Oct 2019 19:16:40 +0100 Message-Id: X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20191018_111709_772425_22836E66 X-CRM114-Status: GOOD ( 21.12 ) X-Spam-Score: 1.3 (+) X-Spam-Report: SpamAssassin version 3.4.2 on bombadil.infradead.org summary: Content analysis details: (1.3 points) pts rule name description ---- ---------------------- -------------------------------------------------- 0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record -0.0 SPF_PASS SPF: sender matches SPF record 1.3 RDNS_NONE Delivered to internal network by a host with no rDNS X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Sam Tebbs , sam-tebbs@arm.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org From: Sam Tebbs Import latest version of Cortex Strings' strlen function. The upstream source is src/aarch64/strlen.S as of commit eb80ac77a6cd in https://git.linaro.org/toolchain/cortex-strings.git. Signed-off-by: Sam Tebbs [ rm: update attribution, expand commit message ] Signed-off-by: Robin Murphy --- arch/arm64/lib/strlen.S | 249 +++++++++++++++++++++++++++------------- 1 file changed, 169 insertions(+), 80 deletions(-) diff --git a/arch/arm64/lib/strlen.S b/arch/arm64/lib/strlen.S index 87b0cb066915..e404edd6068c 100644 --- a/arch/arm64/lib/strlen.S +++ b/arch/arm64/lib/strlen.S @@ -1,13 +1,11 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* - * Copyright (C) 2013 ARM Ltd. - * Copyright (C) 2013 Linaro. + * Copyright (c) 2013-2015 Linaro Limited. All rights reserved. * - * This code is based on glibc cortex strings work originally authored by Linaro - * be found @ + * This code is based on glibc Cortex Strings work originally authored by + * Linaro, found at: * - * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/ - * files/head:/src/aarch64/ + * https://git.linaro.org/toolchain/cortex-strings.git */ #include @@ -23,93 +21,184 @@ */ /* Arguments and results. */ -srcin .req x0 -len .req x0 +#define srcin x0 +#define len x0 /* Locals and temporaries. */ -src .req x1 -data1 .req x2 -data2 .req x3 -data2a .req x4 -has_nul1 .req x5 -has_nul2 .req x6 -tmp1 .req x7 -tmp2 .req x8 -tmp3 .req x9 -tmp4 .req x10 -zeroones .req x11 -pos .req x12 +#define src x1 +#define data1 x2 +#define data2 x3 +#define has_nul1 x4 +#define has_nul2 x5 +#define tmp1 x4 +#define tmp2 x5 +#define tmp3 x6 +#define tmp4 x7 +#define zeroones x8 + +#define L(l) .L ## l + + /* NUL detection works on the principle that (X - 1) & (~X) & 0x80 + (=> (X - 1) & ~(X | 0x7f)) is non-zero iff a byte is zero, and + can be done in parallel across the entire word. A faster check + (X - 1) & 0x80 is zero for non-NUL ASCII characters, but gives + false hits for characters 129..255. 
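To make the trade-off described in that comment concrete, the cheaper filter looks like this in C, alongside the exact form sketched earlier; again purely illustrative, with invented names:

#include <stdint.h>

#define REP8_01 0x0101010101010101ULL
#define REP8_80 0x8080808080808080ULL

/* Cheap filter: zero when every byte of x is in 0x01..0x80.  A 0x00 byte
 * always trips it, but so do bytes 0x81..0xff (e.g. UTF-8 lead bytes), so a
 * hit only means "run the exact check", not "found the terminator". */
static uint64_t maybe_has_nul(uint64_t x)
{
        return (x - REP8_01) & REP8_80;
}

In the main loop this lets two 8-byte words be screened with two subs, an orr and a tst per iteration, falling back to the exact (x - 1) & ~(x | 0x7f) form only when the filter fires.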
*/ #define REP8_01 0x0101010101010101 #define REP8_7f 0x7f7f7f7f7f7f7f7f #define REP8_80 0x8080808080808080 +#ifdef TEST_PAGE_CROSS +# define MIN_PAGE_SIZE 15 +#else +# define MIN_PAGE_SIZE 4096 +#endif + + /* Since strings are short on average, we check the first 16 bytes + of the string for a NUL character. In order to do an unaligned ldp + safely we have to do a page cross check first. If there is a NUL + byte we calculate the length from the 2 8-byte words using + conditional select to reduce branch mispredictions (it is unlikely + strlen will be repeatedly called on strings with the same length). + + If the string is longer than 16 bytes, we align src so don't need + further page cross checks, and process 32 bytes per iteration + using the fast NUL check. If we encounter non-ASCII characters, + fallback to a second loop using the full NUL check. + + If the page cross check fails, we read 16 bytes from an aligned + address, remove any characters before the string, and continue + in the main loop using aligned loads. Since strings crossing a + page in the first 16 bytes are rare (probability of + 16/MIN_PAGE_SIZE ~= 0.4%), this case does not need to be optimized. + + AArch64 systems have a minimum page size of 4k. We don't bother + checking for larger page sizes - the cost of setting up the correct + page size is just not worth the extra gain from a small reduction in + the cases taking the slow path. Note that we only care about + whether the first fetch, which may be misaligned, crosses a page + boundary. */ + WEAK(strlen) - mov zeroones, #REP8_01 - bic src, srcin, #15 - ands tmp1, srcin, #15 - b.ne .Lmisaligned - /* - * NUL detection works on the principle that (X - 1) & (~X) & 0x80 - * (=> (X - 1) & ~(X | 0x7f)) is non-zero iff a byte is zero, and - * can be done in parallel across the entire word. - */ - /* - * The inner loop deals with two Dwords at a time. This has a - * slightly higher start-up cost, but we should win quite quickly, - * especially on cores with a high number of issue slots per - * cycle, as we get much better parallelism out of the operations. - */ -.Lloop: - ldp data1, data2, [src], #16 -.Lrealigned: + and tmp1, srcin, MIN_PAGE_SIZE - 1 + mov zeroones, REP8_01 + cmp tmp1, MIN_PAGE_SIZE - 16 + b.gt L(page_cross) + ldp data1, data2, [srcin] + /* For big-endian, carry propagation (if the final byte in the + string is 0x01) means we cannot use has_nul1/2 directly. + Since we expect strings to be small and early-exit, + byte-swap the data now so has_null1/2 will be correct. */ +CPU_BE(rev data1, data1) +CPU_BE(rev data2, data2) sub tmp1, data1, zeroones - orr tmp2, data1, #REP8_7f + orr tmp2, data1, REP8_7f sub tmp3, data2, zeroones - orr tmp4, data2, #REP8_7f - bic has_nul1, tmp1, tmp2 - bics has_nul2, tmp3, tmp4 - ccmp has_nul1, #0, #0, eq /* NZCV = 0000 */ - b.eq .Lloop + orr tmp4, data2, REP8_7f + bics has_nul1, tmp1, tmp2 + bic has_nul2, tmp3, tmp4 + ccmp has_nul2, 0, 0, eq + beq L(main_loop_entry) - sub len, src, srcin - cbz has_nul1, .Lnul_in_data2 -CPU_BE( mov data2, data1 ) /*prepare data to re-calculate the syndrome*/ - sub len, len, #8 - mov has_nul2, has_nul1 -.Lnul_in_data2: - /* - * For big-endian, carry propagation (if the final byte in the - * string is 0x01) means we cannot use has_nul directly. The - * easiest way to get the correct byte is to byte-swap the data - * and calculate the syndrome a second time. 
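The page-cross test guarding the initial unaligned 16-byte load is likewise simple to state in C; a sketch under the same 4KiB minimum-page assumption the code itself makes (name invented here):

#include <stdint.h>

#define MIN_PAGE_SIZE 4096

/* True when a 16-byte load starting at srcin could spill into the next page,
 * i.e. when fewer than 16 bytes remain before the 4KiB boundary. */
static int first_load_crosses_page(uintptr_t srcin)
{
        return (srcin & (MIN_PAGE_SIZE - 1)) > (MIN_PAGE_SIZE - 16);
}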
- */ -CPU_BE( rev data2, data2 ) -CPU_BE( sub tmp1, data2, zeroones ) -CPU_BE( orr tmp2, data2, #REP8_7f ) -CPU_BE( bic has_nul2, tmp1, tmp2 ) - - sub len, len, #8 - rev has_nul2, has_nul2 - clz pos, has_nul2 - add len, len, pos, lsr #3 /* Bits to bytes. */ + /* Enter with C = has_nul1 == 0. */ + csel has_nul1, has_nul1, has_nul2, cc + mov len, 8 + rev has_nul1, has_nul1 + clz tmp1, has_nul1 + csel len, xzr, len, cc + add len, len, tmp1, lsr 3 ret -.Lmisaligned: - cmp tmp1, #8 - neg tmp1, tmp1 - ldp data1, data2, [src], #16 - lsl tmp1, tmp1, #3 /* Bytes beyond alignment -> bits. */ - mov tmp2, #~0 - /* Big-endian. Early bytes are at MSB. */ -CPU_BE( lsl tmp2, tmp2, tmp1 ) /* Shift (tmp1 & 63). */ - /* Little-endian. Early bytes are at LSB. */ -CPU_LE( lsr tmp2, tmp2, tmp1 ) /* Shift (tmp1 & 63). */ + /* The inner loop processes 32 bytes per iteration and uses the fast + NUL check. If we encounter non-ASCII characters, use a second + loop with the accurate NUL check. */ + .p2align 4 +L(main_loop_entry): + bic src, srcin, 15 + sub src, src, 16 +L(main_loop): + ldp data1, data2, [src, 32]! +.Lpage_cross_entry: + sub tmp1, data1, zeroones + sub tmp3, data2, zeroones + orr tmp2, tmp1, tmp3 + tst tmp2, zeroones, lsl 7 + bne 1f + ldp data1, data2, [src, 16] + sub tmp1, data1, zeroones + sub tmp3, data2, zeroones + orr tmp2, tmp1, tmp3 + tst tmp2, zeroones, lsl 7 + beq L(main_loop) + add src, src, 16 +1: + /* The fast check failed, so do the slower, accurate NUL check. */ + orr tmp2, data1, REP8_7f + orr tmp4, data2, REP8_7f + bics has_nul1, tmp1, tmp2 + bic has_nul2, tmp3, tmp4 + ccmp has_nul2, 0, 0, eq + beq L(nonascii_loop) - orr data1, data1, tmp2 - orr data2a, data2, tmp2 - csinv data1, data1, xzr, le - csel data2, data2, data2a, le - b .Lrealigned + /* Enter with C = has_nul1 == 0. */ +L(tail): + /* For big-endian, carry propagation (if the final byte in the + string is 0x01) means we cannot use has_nul1/2 directly. The + easiest way to get the correct byte is to byte-swap the data + and calculate the syndrome a second time. */ +CPU_BE(csel data1, data1, data2, cc) +CPU_BE(rev data1, data1) +CPU_BE(sub tmp1, data1, zeroones) +CPU_BE(orr tmp2, data1, REP8_7f) +CPU_BE(bic has_nul1, tmp1, tmp2) +CPU_LE(csel has_nul1, has_nul1, has_nul2, cc) + sub len, src, srcin + rev has_nul1, has_nul1 + add tmp2, len, 8 + clz tmp1, has_nul1 + csel len, len, tmp2, cc + add len, len, tmp1, lsr 3 + ret + +L(nonascii_loop): + ldp data1, data2, [src, 16]! + sub tmp1, data1, zeroones + orr tmp2, data1, REP8_7f + sub tmp3, data2, zeroones + orr tmp4, data2, REP8_7f + bics has_nul1, tmp1, tmp2 + bic has_nul2, tmp3, tmp4 + ccmp has_nul2, 0, 0, eq + bne L(tail) + ldp data1, data2, [src, 16]! + sub tmp1, data1, zeroones + orr tmp2, data1, REP8_7f + sub tmp3, data2, zeroones + orr tmp4, data2, REP8_7f + bics has_nul1, tmp1, tmp2 + bic has_nul2, tmp3, tmp4 + ccmp has_nul2, 0, 0, eq + beq L(nonascii_loop) + b L(tail) + + /* Load 16 bytes from [srcin & ~15] and force the bytes that precede + srcin to 0x7f, so we ignore any NUL bytes before the string. + Then continue in the aligned loop. */ +L(page_cross): + bic src, srcin, 15 + ldp data1, data2, [src] + lsl tmp1, srcin, 3 + mov tmp4, -1 + /* Big-endian. Early bytes are at MSB. */ +CPU_BE(lsr tmp1, tmp4, tmp1) /* Shift (tmp1 & 63). */ + /* Little-endian. Early bytes are at LSB. */ +CPU_LE(lsl tmp1, tmp4, tmp1) /* Shift (tmp1 & 63). 
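How the final length falls out of has_nul1/has_nul2 may not be obvious from the csel/rev/clz sequence; the little-endian case can be paraphrased in C as below. This is an illustrative sketch only, with invented names, and it glosses over the condition-flag tricks the assembly uses.

#include <stddef.h>
#include <stdint.h>

#define REP8_01 0x0101010101010101ULL
#define REP8_7f 0x7f7f7f7f7f7f7f7fULL

/* Length of a string known to terminate within the 16 bytes covered by the
 * two little-endian words data1 (bytes 0-7) and data2 (bytes 8-15), so at
 * least one of the detection words below is non-zero. */
static size_t len_in_first16_le(uint64_t data1, uint64_t data2)
{
        uint64_t nul1 = (data1 - REP8_01) & ~(data1 | REP8_7f);
        uint64_t nul2 = (data2 - REP8_01) & ~(data2 | REP8_7f);
        uint64_t word = nul1 ? nul1 : nul2;     /* the csel on the carry flag */
        size_t base = nul1 ? 0 : 8;             /* the csel between 0 and 8   */

        /* The lowest set 0x80 bit marks the first NUL byte; byte-reversing
         * and counting leading zeros turns that into a byte index. */
        return base + (__builtin_clzll(__builtin_bswap64(word)) >> 3);
}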
*/ + orr tmp1, tmp1, REP8_80 + orn data1, data1, tmp1 + orn tmp2, data2, tmp1 + tst srcin, 8 + csel data1, data1, tmp4, eq + csel data2, data2, tmp2, eq + b L(page_cross_entry) ENDPIPROC(strlen) EXPORT_SYMBOL_NOKASAN(strlen) From patchwork Fri Oct 18 18:16:41 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11199367 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id ED5AD112B for ; Fri, 18 Oct 2019 18:18:25 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id C526821835 for ; Fri, 18 Oct 2019 18:18:25 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="SgURhHuf" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C526821835 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=arm.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=LdH9xIKJDM5y66V2wsP/6dyUDh7vqIevwZXqVqtgrFE=; b=SgURhHufbV6uMA KWkXL60lUcrDCfuNGYPIE6WXbqzib0UtpO09NKiCC2ng9737TpQU4SS98Ek3FZQFMAVGkOMUJAdHa 3SlGNoVmNVSa+S+3T/i2ExTozwdgubfazS9sHED/5o63MjpFvLsOdxGcjbd8VYyHQxZCrKafF87SW okYt6B0OiJvJhTJx5r01Dt3sxNLnUsmNcDWnc7qdY7gkjXjrN17Y2PA/eO6AK+EwcZHeYv+2W1YkE hDu7xEMyBjnrHyMM1IaeHMy6ktq6ShBjhnTarAOgQber1T7gUEuxcmfBndTnSljmPrt1+biqScMt8 wSxMQLkveDYL7VMggxfQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1iLWpU-0000Kc-6M; Fri, 18 Oct 2019 18:18:24 +0000 Received: from [217.140.110.172] (helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1iLWoH-0007UR-KP for linux-arm-kernel@lists.infradead.org; Fri, 18 Oct 2019 18:17:17 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0C6CE1650; Fri, 18 Oct 2019 11:16:55 -0700 (PDT) Received: from e110467-lin.cambridge.arm.com (e110467-lin.cambridge.arm.com [10.1.197.57]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 3FE803F718; Fri, 18 Oct 2019 11:16:54 -0700 (PDT) From: Robin Murphy To: will@kernel.org, catalin.marinas@arm.com Subject: [PATCH v2 7/8] arm64: Import latest version of Cortex Strings' strncmp Date: Fri, 18 Oct 2019 19:16:41 +0100 Message-Id: X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20191018_111710_127065_A930488A X-CRM114-Status: GOOD ( 22.88 ) X-Spam-Score: 1.3 (+) X-Spam-Report: SpamAssassin version 3.4.2 on bombadil.infradead.org summary: Content analysis 
details: (1.3 points) pts rule name description ---- ---------------------- -------------------------------------------------- 0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record -0.0 SPF_PASS SPF: sender matches SPF record 1.3 RDNS_NONE Delivered to internal network by a host with no rDNS X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Sam Tebbs , sam-tebbs@arm.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org From: Sam Tebbs Import latest version of Cortex Strings' strncmp function. The upstream source is src/aarch64/strncmp.S as of commit 071fe283b28d in https://git.linaro.org/toolchain/cortex-strings.git. Signed-off-by: Sam Tebbs [ rm: update attribution, expand commit message ] Signed-off-by: Robin Murphy --- arch/arm64/lib/strncmp.S | 379 ++++++++++++++++++--------------------- 1 file changed, 171 insertions(+), 208 deletions(-) diff --git a/arch/arm64/lib/strncmp.S b/arch/arm64/lib/strncmp.S index f571581888fa..bb6d16c1fa75 100644 --- a/arch/arm64/lib/strncmp.S +++ b/arch/arm64/lib/strncmp.S @@ -1,13 +1,11 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* - * Copyright (C) 2013 ARM Ltd. - * Copyright (C) 2013 Linaro. + * Copyright (c) 2013,2018 Linaro Limited. All rights reserved. * - * This code is based on glibc cortex strings work originally authored by Linaro - * be found @ + * This code is based on glibc Cortex Strings work originally authored by + * Linaro, found at: * - * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/ - * files/head:/src/aarch64/ + * https://git.linaro.org/toolchain/cortex-strings.git */ #include @@ -30,49 +28,49 @@ #define REP8_80 0x8080808080808080 /* Parameters and result. */ -src1 .req x0 -src2 .req x1 -limit .req x2 -result .req x0 +#define src1 x0 +#define src2 x1 +#define limit x2 +#define result x0 /* Internal variables. */ -data1 .req x3 -data1w .req w3 -data2 .req x4 -data2w .req w4 -has_nul .req x5 -diff .req x6 -syndrome .req x7 -tmp1 .req x8 -tmp2 .req x9 -tmp3 .req x10 -zeroones .req x11 -pos .req x12 -limit_wd .req x13 -mask .req x14 -endloop .req x15 +#define data1 x3 +#define data1w w3 +#define data2 x4 +#define data2w w4 +#define has_nul x5 +#define diff x6 +#define syndrome x7 +#define tmp1 x8 +#define tmp2 x9 +#define tmp3 x10 +#define zeroones x11 +#define pos x12 +#define limit_wd x13 +#define mask x14 +#define endloop x15 +#define count mask + .p2align 6 + .rep 7 + nop /* Pad so that the loop below fits a cache line. */ + .endr WEAK(strncmp) cbz limit, .Lret0 eor tmp1, src1, src2 mov zeroones, #REP8_01 tst tmp1, #7 + and count, src1, #7 b.ne .Lmisaligned8 - ands tmp1, src1, #7 - b.ne .Lmutual_align + cbnz count, .Lmutual_align /* Calculate the number of full and partial words -1. */ - /* - * when limit is mulitply of 8, if not sub 1, - * the judgement of last dword will wrong. - */ - sub limit_wd, limit, #1 /* limit != 0, so no underflow. */ - lsr limit_wd, limit_wd, #3 /* Convert to Dwords. */ + sub limit_wd, limit, #1 /* limit != 0, so no underflow. */ + lsr limit_wd, limit_wd, #3 /* Convert to Dwords. */ - /* - * NUL detection works on the principle that (X - 1) & (~X) & 0x80 - * (=> (X - 1) & ~(X | 0x7f)) is non-zero iff a byte is zero, and - * can be done in parallel across the entire word. 
- */ + /* NUL detection works on the principle that (X - 1) & (~X) & 0x80 + (=> (X - 1) & ~(X | 0x7f)) is non-zero iff a byte is zero, and + can be done in parallel across the entire word. */ + /* Start of performance-critical section -- one 64B cache line. */ .Lloop_aligned: ldr data1, [src1], #8 ldr data2, [src2], #8 @@ -80,23 +78,24 @@ WEAK(strncmp) subs limit_wd, limit_wd, #1 sub tmp1, data1, zeroones orr tmp2, data1, #REP8_7f - eor diff, data1, data2 /* Non-zero if differences found. */ - csinv endloop, diff, xzr, pl /* Last Dword or differences.*/ - bics has_nul, tmp1, tmp2 /* Non-zero if NUL terminator. */ + eor diff, data1, data2 /* Non-zero if differences found. */ + csinv endloop, diff, xzr, pl /* Last Dword or differences. */ + bics has_nul, tmp1, tmp2 /* Non-zero if NUL terminator. */ ccmp endloop, #0, #0, eq b.eq .Lloop_aligned + /* End of performance-critical section -- one 64B cache line. */ - /*Not reached the limit, must have found the end or a diff. */ + /* Not reached the limit, must have found the end or a diff. */ tbz limit_wd, #63, .Lnot_limit /* Limit % 8 == 0 => all bytes significant. */ ands limit, limit, #7 b.eq .Lnot_limit - lsl limit, limit, #3 /* Bits -> bytes. */ + lsl limit, limit, #3 /* Bits -> bytes. */ mov mask, #~0 -CPU_BE( lsr mask, mask, limit ) -CPU_LE( lsl mask, mask, limit ) +CPU_BE(lsr mask, mask, limit) +CPU_LE(lsl mask, mask, limit) bic data1, data1, mask bic data2, data2, mask @@ -105,192 +104,156 @@ CPU_LE( lsl mask, mask, limit ) .Lnot_limit: orr syndrome, diff, has_nul - b .Lcal_cmpresult + + CPU_LE(rev syndrome, syndrome) + CPU_LE(rev data1, data1) + /* The MS-non-zero bit of the syndrome marks either the first bit + that is different, or the top bit of the first zero byte. + Shifting left now will bring the critical information into the + top bits. */ + CPU_LE(clz pos, syndrome) + CPU_LE(rev data2, data2) + CPU_LE(lsl data1, data1, pos) + CPU_LE(lsl data2, data2, pos) + /* But we need to zero-extend (char is unsigned) the value and then + perform a signed 32-bit subtraction. */ + CPU_LE(lsr data1, data1, #56) + CPU_LE(sub result, data1, data2, lsr #56) + CPU_LE(ret) + /* For big-endian we cannot use the trick with the syndrome value + as carry-propagation can corrupt the upper bits if the trailing + bytes in the string contain 0x01. */ + /* However, if there is no NUL byte in the dword, we can generate + the result directly. We can't just subtract the bytes as the + MSB might be significant. */ + CPU_BE(cbnz has_nul, 1f) + CPU_BE(cmp data1, data2) + CPU_BE(cset result, ne) + CPU_BE(cneg result, result, lo) + CPU_BE(ret) +1: + /* Re-compute the NUL-byte detection, using a byte-reversed value. */ + CPU_BE(rev tmp3, data1) + CPU_BE(sub tmp1, tmp3, zeroones) + CPU_BE(orr tmp2, tmp3, #REP8_7f) + CPU_BE(bic has_nul, tmp1, tmp2) + CPU_BE(rev has_nul, has_nul) + CPU_BE(orr syndrome, diff, has_nul) + CPU_BE(clz pos, syndrome) + /* The MS-non-zero bit of the syndrome marks either the first bit + that is different, or the top bit of the first zero byte. + Shifting left now will bring the critical information into the + top bits. */ + CPU_BE(lsl data1, data1, pos) + CPU_BE(lsl data2, data2, pos) + /* But we need to zero-extend (char is unsigned) the value and then + perform a signed 32-bit subtraction. */ + CPU_BE(lsr data1, data1, #56) + CPU_BE(sub result, data1, data2, lsr #56) + CPU_BE(ret) .Lmutual_align: - /* - * Sources are mutually aligned, but are not currently at an - * alignment boundary. 
Round down the addresses and then mask off - * the bytes that precede the start point. - * We also need to adjust the limit calculations, but without - * overflowing if the limit is near ULONG_MAX. - */ + /* Sources are mutually aligned, but are not currently at an + alignment boundary. Round down the addresses and then mask off + the bytes that precede the start point. + We also need to adjust the limit calculations, but without + overflowing if the limit is near ULONG_MAX. */ bic src1, src1, #7 bic src2, src2, #7 ldr data1, [src1], #8 - neg tmp3, tmp1, lsl #3 /* 64 - bits(bytes beyond align). */ + neg tmp3, count, lsl #3 /* 64 - bits(bytes beyond align). */ ldr data2, [src2], #8 mov tmp2, #~0 - sub limit_wd, limit, #1 /* limit != 0, so no underflow. */ + sub limit_wd, limit, #1 /* limit != 0, so no underflow. */ /* Big-endian. Early bytes are at MSB. */ -CPU_BE( lsl tmp2, tmp2, tmp3 ) /* Shift (tmp1 & 63). */ + CPU_BE(lsl tmp2, tmp2, tmp3) /* Shift (count & 63). */ /* Little-endian. Early bytes are at LSB. */ -CPU_LE( lsr tmp2, tmp2, tmp3 ) /* Shift (tmp1 & 63). */ - + CPU_LE(lsr tmp2, tmp2, tmp3) /* Shift (count & 63). */ and tmp3, limit_wd, #7 lsr limit_wd, limit_wd, #3 - /* Adjust the limit. Only low 3 bits used, so overflow irrelevant.*/ - add limit, limit, tmp1 - add tmp3, tmp3, tmp1 + /* Adjust the limit. Only low 3 bits used, so overflow irrelevant. */ + add limit, limit, count + add tmp3, tmp3, count orr data1, data1, tmp2 orr data2, data2, tmp2 add limit_wd, limit_wd, tmp3, lsr #3 b .Lstart_realigned -/*when src1 offset is not equal to src2 offset...*/ + .p2align 6 + /* Don't bother with dwords for up to 16 bytes. */ .Lmisaligned8: - cmp limit, #8 - b.lo .Ltiny8proc /*limit < 8... */ - /* - * Get the align offset length to compare per byte first. - * After this process, one string's address will be aligned.*/ - and tmp1, src1, #7 - neg tmp1, tmp1 - add tmp1, tmp1, #8 - and tmp2, src2, #7 - neg tmp2, tmp2 - add tmp2, tmp2, #8 - subs tmp3, tmp1, tmp2 - csel pos, tmp1, tmp2, hi /*Choose the maximum. */ - /* - * Here, limit is not less than 8, so directly run .Ltinycmp - * without checking the limit.*/ - sub limit, limit, pos -.Ltinycmp: - ldrb data1w, [src1], #1 - ldrb data2w, [src2], #1 - subs pos, pos, #1 - ccmp data1w, #1, #0, ne /* NZCV = 0b0000. */ - ccmp data1w, data2w, #0, cs /* NZCV = 0b0000. */ - b.eq .Ltinycmp - cbnz pos, 1f /*find the null or unequal...*/ - cmp data1w, #1 - ccmp data1w, data2w, #0, cs - b.eq .Lstart_align /*the last bytes are equal....*/ -1: - sub result, data1, data2 - ret + cmp limit, #16 + b.hs .Ltry_misaligned_words -.Lstart_align: - lsr limit_wd, limit, #3 - cbz limit_wd, .Lremain8 - /*process more leading bytes to make str1 aligned...*/ - ands xzr, src1, #7 - b.eq .Lrecal_offset - add src1, src1, tmp3 /*tmp3 is positive in this branch.*/ - add src2, src2, tmp3 - ldr data1, [src1], #8 - ldr data2, [src2], #8 - - sub limit, limit, tmp3 - lsr limit_wd, limit, #3 - subs limit_wd, limit_wd, #1 - - sub tmp1, data1, zeroones - orr tmp2, data1, #REP8_7f - eor diff, data1, data2 /* Non-zero if differences found. */ - csinv endloop, diff, xzr, ne/*if limit_wd is 0,will finish the cmp*/ - bics has_nul, tmp1, tmp2 - ccmp endloop, #0, #0, eq /*has_null is ZERO: no null byte*/ - b.ne .Lunequal_proc - /*How far is the current str2 from the alignment boundary...*/ - and tmp3, tmp3, #7 -.Lrecal_offset: - neg pos, tmp3 -.Lloopcmp_proc: - /* - * Divide the eight bytes into two parts. 
First,backwards the src2 - * to an alignment boundary,load eight bytes from the SRC2 alignment - * boundary,then compare with the relative bytes from SRC1. - * If all 8 bytes are equal,then start the second part's comparison. - * Otherwise finish the comparison. - * This special handle can garantee all the accesses are in the - * thread/task space in avoid to overrange access. - */ - ldr data1, [src1,pos] - ldr data2, [src2,pos] - sub tmp1, data1, zeroones - orr tmp2, data1, #REP8_7f - bics has_nul, tmp1, tmp2 /* Non-zero if NUL terminator. */ - eor diff, data1, data2 /* Non-zero if differences found. */ - csinv endloop, diff, xzr, eq - cbnz endloop, .Lunequal_proc - - /*The second part process*/ - ldr data1, [src1], #8 - ldr data2, [src2], #8 - subs limit_wd, limit_wd, #1 - sub tmp1, data1, zeroones - orr tmp2, data1, #REP8_7f - eor diff, data1, data2 /* Non-zero if differences found. */ - csinv endloop, diff, xzr, ne/*if limit_wd is 0,will finish the cmp*/ - bics has_nul, tmp1, tmp2 - ccmp endloop, #0, #0, eq /*has_null is ZERO: no null byte*/ - b.eq .Lloopcmp_proc - -.Lunequal_proc: - orr syndrome, diff, has_nul - cbz syndrome, .Lremain8 -.Lcal_cmpresult: - /* - * reversed the byte-order as big-endian,then CLZ can find the most - * significant zero bits. - */ -CPU_LE( rev syndrome, syndrome ) -CPU_LE( rev data1, data1 ) -CPU_LE( rev data2, data2 ) - /* - * For big-endian we cannot use the trick with the syndrome value - * as carry-propagation can corrupt the upper bits if the trailing - * bytes in the string contain 0x01. - * However, if there is no NUL byte in the dword, we can generate - * the result directly. We can't just subtract the bytes as the - * MSB might be significant. - */ -CPU_BE( cbnz has_nul, 1f ) -CPU_BE( cmp data1, data2 ) -CPU_BE( cset result, ne ) -CPU_BE( cneg result, result, lo ) -CPU_BE( ret ) -CPU_BE( 1: ) - /* Re-compute the NUL-byte detection, using a byte-reversed value.*/ -CPU_BE( rev tmp3, data1 ) -CPU_BE( sub tmp1, tmp3, zeroones ) -CPU_BE( orr tmp2, tmp3, #REP8_7f ) -CPU_BE( bic has_nul, tmp1, tmp2 ) -CPU_BE( rev has_nul, has_nul ) -CPU_BE( orr syndrome, diff, has_nul ) - /* - * The MS-non-zero bit of the syndrome marks either the first bit - * that is different, or the top bit of the first zero byte. - * Shifting left now will bring the critical information into the - * top bits. - */ - clz pos, syndrome - lsl data1, data1, pos - lsl data2, data2, pos - /* - * But we need to zero-extend (char is unsigned) the value and then - * perform a signed 32-bit subtraction. - */ - lsr data1, data1, #56 - sub result, data1, data2, lsr #56 - ret - -.Lremain8: - /* Limit % 8 == 0 => all bytes significant. */ - ands limit, limit, #7 - b.eq .Lret0 -.Ltiny8proc: +.Lbyte_loop: + /* Perhaps we can do better than this. */ ldrb data1w, [src1], #1 ldrb data2w, [src2], #1 subs limit, limit, #1 - - ccmp data1w, #1, #0, ne /* NZCV = 0b0000. */ - ccmp data1w, data2w, #0, cs /* NZCV = 0b0000. */ - b.eq .Ltiny8proc + ccmp data1w, #1, #0, hi /* NZCV = 0b0000. */ + ccmp data1w, data2w, #0, cs /* NZCV = 0b0000. */ + b.eq .Lbyte_loop +.Ldone: sub result, data1, data2 ret + /* Align the SRC1 to a dword by doing a bytewise compare and then do + the dword loop. */ +.Ltry_misaligned_words: + lsr limit_wd, limit, #3 + cbz count, .Ldo_misaligned + + neg count, count + and count, count, #7 + sub limit, limit, count + lsr limit_wd, limit, #3 + +.Lpage_end_loop: + ldrb data1w, [src1], #1 + ldrb data2w, [src2], #1 + cmp data1w, #1 + ccmp data1w, data2w, #0, cs /* NZCV = 0b0000. 
*/ + b.ne .Ldone + subs count, count, #1 + b.hi .Lpage_end_loop + +.Ldo_misaligned: + /* Prepare ourselves for the next page crossing. Unlike the aligned + loop, we fetch 1 less dword because we risk crossing bounds on + SRC2. */ + mov count, #8 + subs limit_wd, limit_wd, #1 + b.lo .Ldone_loop +.Lloop_misaligned: + and tmp2, src2, #0xff8 + eor tmp2, tmp2, #0xff8 + cbz tmp2, .Lpage_end_loop + + ldr data1, [src1], #8 + ldr data2, [src2], #8 + sub tmp1, data1, zeroones + orr tmp2, data1, #REP8_7f + eor diff, data1, data2 /* Non-zero if differences found. */ + bics has_nul, tmp1, tmp2 /* Non-zero if NUL terminator. */ + ccmp diff, #0, #0, eq + b.ne .Lnot_limit + subs limit_wd, limit_wd, #1 + b.pl .Lloop_misaligned + +.Ldone_loop: + /* We found a difference or a NULL before the limit was reached. */ + and limit, limit, #7 + cbz limit, .Lnot_limit + /* Read the last word. */ + sub src1, src1, 8 + sub src2, src2, 8 + ldr data1, [src1, limit] + ldr data2, [src2, limit] + sub tmp1, data1, zeroones + orr tmp2, data1, #REP8_7f + eor diff, data1, data2 /* Non-zero if differences found. */ + bics has_nul, tmp1, tmp2 /* Non-zero if NUL terminator. */ + ccmp diff, #0, #0, eq + b.ne .Lnot_limit .Lret0: mov result, #0 From patchwork Fri Oct 18 18:16:42 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11199375 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DB00514ED for ; Fri, 18 Oct 2019 18:19:03 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id ABE9A20820 for ; Fri, 18 Oct 2019 18:19:03 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="qN3biWkB" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org ABE9A20820 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=arm.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=rp0fY18/xtVcxQdAUxGQZMrACSD1JYDx5aC8RvHRNTs=; b=qN3biWkB8+zJpW p9qVM6VriCx+0KKxpHqPP5abSVBfJJR3ZaTNWsmEnqCF9awPPc8Qkoy5ddzWLEsZl4akdey+G+9tO LOEiSsVDn3Dyb2y/iWXGDVGtgOaVHyHrTOCKdy7ABuMxWCOuNJ1eueIyezfVAbSCmDLGCMzC9AcgC FhiD3mH31Gkyiiz2czvZczuLUqKj1KAO1AEE2Wrf1fjqAi+NYuMHPT3+0oziIXzDIhgqQZbEcllja 9qLQrsn17U+fun42A3w/CGbfH5kmg5QicUH/k5oYZy5kj46iWn/nRWS1GNs/mIbjFu85ogn0iDeea +j2+LuyxfZ+9786Ur1uQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1iLWq7-0000tK-3T; Fri, 18 Oct 2019 18:19:03 +0000 Received: from [217.140.110.172] (helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1iLWoH-0007UT-Jd for 
linux-arm-kernel@lists.infradead.org; Fri, 18 Oct 2019 18:17:12 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E6AA61655; Fri, 18 Oct 2019 11:16:55 -0700 (PDT) Received: from e110467-lin.cambridge.arm.com (e110467-lin.cambridge.arm.com [10.1.197.57]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 40D3B3F718; Fri, 18 Oct 2019 11:16:55 -0700 (PDT) From: Robin Murphy To: will@kernel.org, catalin.marinas@arm.com Subject: [PATCH v2 8/8] arm64: Tidy up _asm_extable_faultaddr usage Date: Fri, 18 Oct 2019 19:16:42 +0100 Message-Id: <2dfad62454b9a1c7c230d2743fbc0ca01f1574a9.1571421836.git.robin.murphy@arm.com> X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20191018_111709_792352_017FD59F X-CRM114-Status: GOOD ( 11.82 ) X-Spam-Score: 1.3 (+) X-Spam-Report: SpamAssassin version 3.4.2 on bombadil.infradead.org summary: Content analysis details: (1.3 points) pts rule name description ---- ---------------------- -------------------------------------------------- 0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record -0.0 SPF_PASS SPF: sender matches SPF record 1.3 RDNS_NONE Delivered to internal network by a host with no rDNS X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: sam-tebbs@arm.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org To match the way the USER() shorthand wraps _asm_extable entries, introduce USER_F() to wrap _asm_extable_faultaddr and clean up a bit. Signed-off-by: Robin Murphy --- arch/arm64/include/asm/assembler.h | 4 ++ arch/arm64/lib/copy_from_user.S | 36 +++++---------- arch/arm64/lib/copy_in_user.S | 72 ++++++++++-------------------- arch/arm64/lib/copy_to_user.S | 33 +++++--------- 4 files changed, 51 insertions(+), 94 deletions(-) diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index 02bb156cbf0e..959f6fa493ef 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -155,6 +155,10 @@ alternative_endif 9999: x; \ _asm_extable 9999b, l +#define USER_F(l, x...) \ +9999: x; \ + _asm_extable_faultaddr 9999b, l + /* * Register aliases. 
*/ diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S index 8928c38d8c76..d0bcfad5dafd 100644 --- a/arch/arm64/lib/copy_from_user.S +++ b/arch/arm64/lib/copy_from_user.S @@ -21,8 +21,7 @@ */ .macro ldrb1 reg, ptr, offset=0 - 8888: ldtrb \reg, [\ptr, \offset] - _asm_extable_faultaddr 8888b,9998f; + USER_F(9998f, ldtrb \reg, [\ptr, \offset]) .endm .macro strb1 reg, ptr, offset=0 @@ -31,9 +30,8 @@ .macro ldrb1_reg reg, ptr, offset add \ptr, \ptr, \offset - 8888: ldtrb \reg, [\ptr] + USER_F(9998f, ldtrb \reg, [\ptr]) sub \ptr, \ptr, \offset - _asm_extable_faultaddr 8888b,9998f; .endm .macro strb1_reg reg, ptr, offset @@ -41,8 +39,7 @@ .endm .macro ldr1 reg, ptr, offset=0 - 8888: ldtr \reg, [\ptr, \offset] - _asm_extable_faultaddr 8888b,9998f; + USER_F(9998f, ldtr \reg, [\ptr, \offset]) .endm .macro str1 reg, ptr, offset=0 @@ -50,10 +47,8 @@ .endm .macro ldp1 regA, regB, ptr, offset=0 - 8888: ldtr \regA, [\ptr, \offset] - 8889: ldtr \regB, [\ptr, \offset + 8] - _asm_extable_faultaddr 8888b,9998f; - _asm_extable_faultaddr 8889b,9998f; + USER_F(9998f, ldtr \regA, [\ptr, \offset]) + USER_F(9998f, ldtr \regB, [\ptr, \offset + 8]) .endm .macro stp1 regA, regB, ptr, offset=0 @@ -61,11 +56,9 @@ .endm .macro ldp1_pre regA, regB, ptr, offset - 8888: ldtr \regA, [\ptr, \offset] - 8889: ldtr \regB, [\ptr, \offset + 8] + USER_F(9998f, ldtr \regA, [\ptr, \offset]) + USER_F(9998f, ldtr \regB, [\ptr, \offset + 8]) add \ptr, \ptr, \offset - _asm_extable_faultaddr 8888b,9998f; - _asm_extable_faultaddr 8889b,9998f; .endm .macro stp1_pre regA, regB, ptr, offset @@ -73,8 +66,7 @@ .endm .macro ldrb1_nuao reg, ptr, offset=0 - 8888: ldrb \reg, [\ptr, \offset] - _asm_extable_faultaddr 8888b,9998f; + USER_F(9998f, ldrb \reg, [\ptr, \offset]) .endm .macro strb1_nuao reg, ptr, offset=0 @@ -82,8 +74,7 @@ .endm .macro ldrb1_nuao_reg reg, ptr, offset=0 - 8888: ldrb \reg, [\ptr, \offset] - _asm_extable_faultaddr 8888b,9998f; + USER_F(9998f, ldrb \reg, [\ptr, \offset]) .endm .macro strb1_nuao_reg reg, ptr, offset=0 @@ -91,8 +82,7 @@ .endm .macro ldr1_nuao reg, ptr, offset=0 - 8888: ldr \reg, [\ptr, \offset] - _asm_extable_faultaddr 8888b,9998f; + USER_F(9998f, ldr \reg, [\ptr, \offset]) .endm .macro str1_nuao reg, ptr, offset=0 @@ -100,8 +90,7 @@ .endm .macro ldp1_nuao regA, regB, ptr, offset=0 - 8888: ldp \regA, \regB, [\ptr, \offset] - _asm_extable_faultaddr 8888b,9998f; + USER_F(9998f, ldp \regA, \regB, [\ptr, \offset]) .endm .macro stp1_nuao regA, regB, ptr, offset=0 @@ -109,8 +98,7 @@ .endm .macro ldp1_pre_nuao regA, regB, ptr, offset - 8888: ldp \regA, \regB, [\ptr, \offset]! - _asm_extable_faultaddr 8888b,9998f; + USER_F(9998f, ldp \regA, \regB, [\ptr, \offset]!) 
 	.endm
 
 	.macro stp1_pre_nuao regA, regB, ptr, offset
diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
index 2aecdc300c8d..25fdedc6eed8 100644
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -23,117 +23,93 @@
  */
 
 	.macro ldrb1 reg, ptr, offset=0
-8888:	ldtrb \reg, [\ptr, \offset]
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, ldtrb \reg, [\ptr, \offset])
 	.endm
 
 	.macro strb1 reg, ptr, offset=0
-8888:	sttrb \reg, [\ptr, \offset]
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, sttrb \reg, [\ptr, \offset])
 	.endm
 
 	.macro ldrb1_reg reg, ptr, offset
 	add \ptr, \ptr, \offset
-8888:	ldtrb \reg, [\ptr]
+	USER_F(9998f, ldtrb \reg, [\ptr])
 	sub \ptr, \ptr, \offset
-	_asm_extable_faultaddr	8888b,9998f;
 	.endm
 
 	.macro strb1_reg reg, ptr, offset
 	add \ptr, \ptr, \offset
-8888:	sttrb \reg, [\ptr]
+	USER_F(9998f, sttrb \reg, [\ptr])
 	sub \ptr, \ptr, \offset
-	_asm_extable_faultaddr	8888b,9998f;
 	.endm
 
 	.macro ldr1 reg, ptr, offset=0
-8888:	ldtr \reg, [\ptr, \offset]
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, ldtr \reg, [\ptr, \offset])
 	.endm
 
 	.macro str1 reg, ptr, offset=0
-8888:	sttr \reg, [\ptr, \offset]
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, sttr \reg, [\ptr, \offset])
 	.endm
 
 	.macro ldp1 regA, regB, ptr, offset=0
-8888:	ldtr \regA, [\ptr, \offset]
-8889:	ldtr \regB, [\ptr, \offset + 8]
-	_asm_extable_faultaddr	8888b,9998f;
-	_asm_extable_faultaddr	8889b,9998f;
+	USER_F(9998f, ldtr \regA, [\ptr, \offset])
+	USER_F(9998f, ldtr \regB, [\ptr, \offset + 8])
 	.endm
 
 	.macro stp1 regA, regB, ptr, offset=0
-8888:	sttr \regA, [\ptr, \offset]
-8889:	sttr \regB, [\ptr, \offset + 8]
-	_asm_extable_faultaddr	8888b,9998f;
-	_asm_extable_faultaddr	8889b,9998f;
+	USER_F(9998f, sttr \regA, [\ptr, \offset])
+	USER_F(9998f, sttr \regB, [\ptr, \offset + 8])
 	.endm
 
 	.macro ldp1_pre regA, regB, ptr, offset
-8888:	ldtr \regA, [\ptr, \offset]
-8889:	ldtr \regB, [\ptr, \offset + 8]
+	USER_F(9998f, ldtr \regA, [\ptr, \offset])
+	USER_F(9998f, ldtr \regB, [\ptr, \offset + 8])
 	add \ptr, \ptr, \offset
-	_asm_extable_faultaddr	8888b,9998f;
-	_asm_extable_faultaddr	8889b,9998f;
 	.endm
 
 	.macro stp1_pre regA, regB, ptr, offset
-8888:	sttr \regA, [\ptr, \offset]
-8889:	sttr \regB, [\ptr, \offset + 8]
+	USER_F(9998f, sttr \regA, [\ptr, \offset])
+	USER_F(9998f, sttr \regB, [\ptr, \offset + 8])
 	add \ptr, \ptr, \offset
-	_asm_extable_faultaddr	8888b,9998f;
-	_asm_extable_faultaddr	8889b,9998f;
 	.endm
 
 	.macro ldrb1_nuao reg, ptr, offset=0
-8888:	ldrb \reg, [\ptr, \offset]
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, ldrb \reg, [\ptr, \offset])
 	.endm
 
 	.macro strb1_nuao reg, ptr, offset=0
-8888:	strb \reg, [\ptr, \offset]
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, strb \reg, [\ptr, \offset])
 	.endm
 
 	.macro ldrb1_nuao_reg reg, ptr, offset=0
-8888:	ldrb \reg, [\ptr, \offset]
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, ldrb \reg, [\ptr, \offset])
 	.endm
 
 	.macro strb1_nuao_reg reg, ptr, offset=0
-8888:	strb \reg, [\ptr, \offset]
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, strb \reg, [\ptr, \offset])
 	.endm
 
 	.macro ldr1_nuao reg, ptr, offset=0
-8888:	ldr \reg, [\ptr, \offset]
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, ldr \reg, [\ptr, \offset])
 	.endm
 
 	.macro str1_nuao reg, ptr, offset=0
-8888:	str \reg, [\ptr, \offset]
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, str \reg, [\ptr, \offset])
 	.endm
 
 	.macro ldp1_nuao regA, regB, ptr, offset=0
-8888:	ldp \regA, \regB, [\ptr, \offset]
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, ldp \regA, \regB, [\ptr, \offset])
 	.endm
 
 	.macro stp1_nuao regA, regB, ptr, offset=0
-8888:	stp \regA, \regB, [\ptr, \offset]
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, stp \regA, \regB, [\ptr, \offset])
 	.endm
 
 	.macro ldp1_pre_nuao regA, regB, ptr, offset
-8888:	ldp \regA, \regB, [\ptr, \offset]!
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, ldp \regA, \regB, [\ptr, \offset]!)
 	.endm
 
 	.macro stp1_pre_nuao regA, regB, ptr, offset
-8888:	stp \regA, \regB, [\ptr, \offset]!
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, stp \regA, \regB, [\ptr, \offset]!)
 	.endm
 
 	.macro copy_exit
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index d49db097bc31..744c57460619 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -25,8 +25,7 @@
 	.endm
 
 	.macro strb1 reg, ptr, offset=0
-8888:	sttrb \reg, [\ptr, \offset]
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, sttrb \reg, [\ptr, \offset])
 	.endm
 
 	.macro ldrb1_reg reg, ptr, offset
@@ -35,9 +34,8 @@
 
 	.macro strb1_reg reg, ptr, offset
 	add \ptr, \ptr, \offset
-8888:	sttrb \reg, [\ptr]
+	USER_F(9998f, sttrb \reg, [\ptr])
 	sub \ptr, \ptr, \offset
-	_asm_extable_faultaddr	8888b,9998f;
 	.endm
 
 	.macro ldr1 reg, ptr, offset=0
@@ -45,8 +43,7 @@
 	.endm
 
 	.macro str1 reg, ptr, offset=0
-8888:	sttr \reg, [\ptr, \offset]
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, sttr \reg, [\ptr, \offset])
 	.endm
 
 	.macro ldp1 regA, regB, ptr, offset=0
@@ -54,10 +51,8 @@
 	.endm
 
 	.macro stp1 regA, regB, ptr, offset=0
-8888:	sttr \regA, [\ptr, \offset]
-8889:	sttr \regB, [\ptr, \offset + 8]
-	_asm_extable_faultaddr	8888b,9998f;
-	_asm_extable_faultaddr	8889b,9998f;
+	USER_F(9998f, sttr \regA, [\ptr, \offset])
+	USER_F(9998f, sttr \regB, [\ptr, \offset + 8])
 	.endm
 
 	.macro ldp1_pre regA, regB, ptr, offset
@@ -65,11 +60,9 @@
 	.endm
 
 	.macro stp1_pre regA, regB, ptr, offset
-8888:	sttr \regA, [\ptr, \offset]
-8889:	sttr \regB, [\ptr, \offset + 8]
+	USER_F(9998f, sttr \regA, [\ptr, \offset])
+	USER_F(9998f, sttr \regB, [\ptr, \offset + 8])
 	add \ptr, \ptr, \offset
-	_asm_extable_faultaddr	8888b,9998f;
-	_asm_extable_faultaddr	8889b,9998f;
 	.endm
 
 	.macro ldrb1_nuao reg, ptr, offset=0
@@ -77,8 +70,7 @@
 	.endm
 
 	.macro strb1_nuao reg, ptr, offset=0
-8888:	strb \reg, [\ptr, \offset]
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, strb \reg, [\ptr, \offset])
 	.endm
 
 	.macro ldrb1_nuao_reg reg, ptr, offset=0
@@ -94,8 +86,7 @@
 	.endm
 
 	.macro str1_nuao reg, ptr, offset=0
-8888:	str \reg, [\ptr, \offset]
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, str \reg, [\ptr, \offset])
 	.endm
 
 	.macro ldp1_nuao regA, regB, ptr, offset=0
@@ -107,13 +98,11 @@
 	.endm
 
 	.macro stp1_nuao regA, regB, ptr, offset=0
-8888:	stp \regA, \regB, [\ptr, \offset]
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, stp \regA, \regB, [\ptr, \offset])
 	.endm
 
 	.macro stp1_pre_nuao regA, regB, ptr, offset
-8888:	stp \regA, \regB, [\ptr, \offset]!
-	_asm_extable_faultaddr	8888b,9998f;
+	USER_F(9998f, stp \regA, \regB, [\ptr, \offset]!)
 	.endm
 
 	.macro copy_exit