From patchwork Sun Sep 19 19:21:02 2021
X-Patchwork-Submitter: Matteo Croce
X-Patchwork-Id: 12504409
From: Matteo Croce
To: linux-riscv@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    Paul Walmsley, Palmer Dabbelt, Albert Ou, Atish Patra,
    Emil Renner Berthing, Akira Tsukamoto, Drew Fustini, Bin Meng,
    David Laight, Guo Ren, Christoph Hellwig
Subject: [PATCH v4 1/3] riscv: optimized memcpy
Date: Sun, 19 Sep 2021 21:21:02 +0200
Message-Id: <20210919192104.98592-2-mcroce@linux.microsoft.com>
In-Reply-To: <20210919192104.98592-1-mcroce@linux.microsoft.com>
References: <20210919192104.98592-1-mcroce@linux.microsoft.com>

From: Matteo Croce

Write a C version of memcpy() which uses the biggest data size allowed,
without generating unaligned accesses.

The procedure is made of three steps. First, copy data one byte at a
time until the destination buffer is aligned to a long boundary. Then
copy the data one long at a time, shifting the current and the next
long to compose an aligned long at every cycle. Finally, copy the
remainder one byte at a time.
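[Editorial illustration, not part of the patch: a minimal user-space sketch
of the shift-and-merge step described above. It assumes a little-endian
64-bit machine and a source that sits 3 bytes past the aligned-down long
containing it (distance = 3); the names merged/expected exist only for this
example.]

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          const uint8_t src[16] = {
                  0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
                  0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
          };
          unsigned int distance = 3;      /* source misalignment in bytes */
          uint64_t last, next, merged, expected;

          /* Two aligned loads around the misaligned source region... */
          memcpy(&last, &src[0], 8);
          memcpy(&next, &src[8], 8);

          /* ...merged into the value an unaligned 8-byte load at src+3 would return */
          merged = last >> (distance * 8) | next << ((8 - distance) * 8);

          memcpy(&expected, &src[distance], 8);   /* byte-wise reference load */
          printf("merged=%016llx expected=%016llx\n",
                 (unsigned long long)merged, (unsigned long long)expected);
          return 0;
  }
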
On a BeagleV, the TCP RX throughput increased by 45%:

before:

$ iperf3 -c beaglev
Connecting to host beaglev, port 5201
[  5] local 192.168.85.6 port 44840 connected to 192.168.85.48 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  76.4 MBytes   641 Mbits/sec   27    624 KBytes
[  5]   1.00-2.00   sec  72.5 MBytes   608 Mbits/sec    0    708 KBytes
[  5]   2.00-3.00   sec  73.8 MBytes   619 Mbits/sec   10    451 KBytes
[  5]   3.00-4.00   sec  72.5 MBytes   608 Mbits/sec    0    564 KBytes
[  5]   4.00-5.00   sec  73.8 MBytes   619 Mbits/sec    0    658 KBytes
[  5]   5.00-6.00   sec  73.8 MBytes   619 Mbits/sec   14    522 KBytes
[  5]   6.00-7.00   sec  73.8 MBytes   619 Mbits/sec    0    621 KBytes
[  5]   7.00-8.00   sec  72.5 MBytes   608 Mbits/sec    0    706 KBytes
[  5]   8.00-9.00   sec  73.8 MBytes   619 Mbits/sec   20    580 KBytes
[  5]   9.00-10.00  sec  73.8 MBytes   619 Mbits/sec    0    672 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   736 MBytes   618 Mbits/sec   71            sender
[  5]   0.00-10.01  sec   733 MBytes   615 Mbits/sec                 receiver

after:

$ iperf3 -c beaglev
Connecting to host beaglev, port 5201
[  5] local 192.168.85.6 port 44864 connected to 192.168.85.48 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   109 MBytes   912 Mbits/sec   48    559 KBytes
[  5]   1.00-2.00   sec   108 MBytes   902 Mbits/sec    0    690 KBytes
[  5]   2.00-3.00   sec   106 MBytes   891 Mbits/sec   36    396 KBytes
[  5]   3.00-4.00   sec   108 MBytes   902 Mbits/sec    0    567 KBytes
[  5]   4.00-5.00   sec   106 MBytes   891 Mbits/sec    0    699 KBytes
[  5]   5.00-6.00   sec   106 MBytes   891 Mbits/sec   32    414 KBytes
[  5]   6.00-7.00   sec   106 MBytes   891 Mbits/sec    0    583 KBytes
[  5]   7.00-8.00   sec   106 MBytes   891 Mbits/sec    0    708 KBytes
[  5]   8.00-9.00   sec   106 MBytes   891 Mbits/sec   28    433 KBytes
[  5]   9.00-10.00  sec   108 MBytes   902 Mbits/sec    0    591 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.04 GBytes   897 Mbits/sec  144            sender
[  5]   0.00-10.01  sec  1.04 GBytes   894 Mbits/sec                 receiver

And the decreased CPU time of the memcpy() is observable with perf top.
This is the `perf top -Ue task-clock` output when doing the test:

before:

  Overhead  Shared O  Symbol
    42.22%  [kernel]  [k] memcpy
    35.00%  [kernel]  [k] __asm_copy_to_user
     3.50%  [kernel]  [k] sifive_l2_flush64_range
     2.30%  [kernel]  [k] stmmac_napi_poll_rx
     1.11%  [kernel]  [k] memset

after:

  Overhead  Shared O  Symbol
    45.69%  [kernel]  [k] __asm_copy_to_user
    29.06%  [kernel]  [k] memcpy
     4.09%  [kernel]  [k] sifive_l2_flush64_range
     2.77%  [kernel]  [k] stmmac_napi_poll_rx
     1.24%  [kernel]  [k] memset

Signed-off-by: Matteo Croce
Reported-by: kernel test robot
---
 arch/riscv/include/asm/string.h |   8 ++-
 arch/riscv/kernel/riscv_ksyms.c |   2 -
 arch/riscv/lib/Makefile         |   2 +-
 arch/riscv/lib/memcpy.S         | 108 --------------------------------
 arch/riscv/lib/string.c         |  90 ++++++++++++++++++++++++++
 5 files changed, 97 insertions(+), 113 deletions(-)
 delete mode 100644 arch/riscv/lib/memcpy.S
 create mode 100644 arch/riscv/lib/string.c

diff --git a/arch/riscv/include/asm/string.h b/arch/riscv/include/asm/string.h
index 909049366555..6b5d6fc3eab4 100644
--- a/arch/riscv/include/asm/string.h
+++ b/arch/riscv/include/asm/string.h
@@ -12,9 +12,13 @@
 #define __HAVE_ARCH_MEMSET
 extern asmlinkage void *memset(void *, int, size_t);
 extern asmlinkage void *__memset(void *, int, size_t);
+
+#ifdef CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE
 #define __HAVE_ARCH_MEMCPY
-extern asmlinkage void *memcpy(void *, const void *, size_t);
-extern asmlinkage void *__memcpy(void *, const void *, size_t);
+extern void *memcpy(void *dest, const void *src, size_t count);
+extern void *__memcpy(void *dest, const void *src, size_t count);
+#endif
+
 #define __HAVE_ARCH_MEMMOVE
 extern asmlinkage void *memmove(void *, const void *, size_t);
 extern asmlinkage void *__memmove(void *, const void *, size_t);
diff --git a/arch/riscv/kernel/riscv_ksyms.c b/arch/riscv/kernel/riscv_ksyms.c
index 5ab1c7e1a6ed..3f6d512a5b97 100644
--- a/arch/riscv/kernel/riscv_ksyms.c
+++ b/arch/riscv/kernel/riscv_ksyms.c
@@ -10,8 +10,6 @@
  * Assembly functions that may be used (directly or indirectly) by modules
  */
 EXPORT_SYMBOL(memset);
-EXPORT_SYMBOL(memcpy);
 EXPORT_SYMBOL(memmove);
 EXPORT_SYMBOL(__memset);
-EXPORT_SYMBOL(__memcpy);
 EXPORT_SYMBOL(__memmove);
diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
index 25d5c9664e57..2ffe85d4baee 100644
--- a/arch/riscv/lib/Makefile
+++ b/arch/riscv/lib/Makefile
@@ -1,9 +1,9 @@
 # SPDX-License-Identifier: GPL-2.0-only
 lib-y                += delay.o
-lib-y                += memcpy.o
 lib-y                += memset.o
 lib-y                += memmove.o
 lib-$(CONFIG_MMU)    += uaccess.o
 lib-$(CONFIG_64BIT)  += tishift.o
+lib-$(CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE) += string.o
 
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
diff --git a/arch/riscv/lib/memcpy.S b/arch/riscv/lib/memcpy.S
deleted file mode 100644
index 51ab716253fa..000000000000
--- a/arch/riscv/lib/memcpy.S
+++ /dev/null
@@ -1,108 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (C) 2013 Regents of the University of California
- */
-
-#include
-#include
-
-/* void *memcpy(void *, const void *, size_t) */
-ENTRY(__memcpy)
-WEAK(memcpy)
-        move t6, a0  /* Preserve return value */
-
-        /* Defer to byte-oriented copy for small sizes */
-        sltiu a3, a2, 128
-        bnez a3, 4f
-        /* Use word-oriented copy only if low-order bits match */
-        andi a3, t6, SZREG-1
-        andi a4, a1, SZREG-1
-        bne a3, a4, 4f
-
-        beqz a3, 2f  /* Skip if already aligned */
-        /*
-         * Round to nearest double word-aligned address
-         * greater than or equal to start address
-         */
-        andi a3, a1, ~(SZREG-1)
-        addi a3, a3, SZREG
-        /* Handle initial misalignment */
-        sub a4, a3, a1
-1:
-        lb a5, 0(a1)
-        addi a1, a1, 1
-        sb a5, 0(t6)
-        addi t6, t6, 1
-        bltu a1, a3, 1b
-        sub a2, a2, a4  /* Update count */
-
-2:
-        andi a4, a2, ~((16*SZREG)-1)
-        beqz a4, 4f
-        add a3, a1, a4
-3:
-        REG_L a4,        0(a1)
-        REG_L a5,    SZREG(a1)
-        REG_L a6,  2*SZREG(a1)
-        REG_L a7,  3*SZREG(a1)
-        REG_L t0,  4*SZREG(a1)
-        REG_L t1,  5*SZREG(a1)
-        REG_L t2,  6*SZREG(a1)
-        REG_L t3,  7*SZREG(a1)
-        REG_L t4,  8*SZREG(a1)
-        REG_L t5,  9*SZREG(a1)
-        REG_S a4,        0(t6)
-        REG_S a5,    SZREG(t6)
-        REG_S a6,  2*SZREG(t6)
-        REG_S a7,  3*SZREG(t6)
-        REG_S t0,  4*SZREG(t6)
-        REG_S t1,  5*SZREG(t6)
-        REG_S t2,  6*SZREG(t6)
-        REG_S t3,  7*SZREG(t6)
-        REG_S t4,  8*SZREG(t6)
-        REG_S t5,  9*SZREG(t6)
-        REG_L a4, 10*SZREG(a1)
-        REG_L a5, 11*SZREG(a1)
-        REG_L a6, 12*SZREG(a1)
-        REG_L a7, 13*SZREG(a1)
-        REG_L t0, 14*SZREG(a1)
-        REG_L t1, 15*SZREG(a1)
-        addi a1, a1, 16*SZREG
-        REG_S a4, 10*SZREG(t6)
-        REG_S a5, 11*SZREG(t6)
-        REG_S a6, 12*SZREG(t6)
-        REG_S a7, 13*SZREG(t6)
-        REG_S t0, 14*SZREG(t6)
-        REG_S t1, 15*SZREG(t6)
-        addi t6, t6, 16*SZREG
-        bltu a1, a3, 3b
-        andi a2, a2, (16*SZREG)-1  /* Update count */
-
-4:
-        /* Handle trailing misalignment */
-        beqz a2, 6f
-        add a3, a1, a2
-
-        /* Use word-oriented copy if co-aligned to word boundary */
-        or a5, a1, t6
-        or a5, a5, a3
-        andi a5, a5, 3
-        bnez a5, 5f
-7:
-        lw a4, 0(a1)
-        addi a1, a1, 4
-        sw a4, 0(t6)
-        addi t6, t6, 4
-        bltu a1, a3, 7b
-
-        ret
-
-5:
-        lb a4, 0(a1)
-        addi a1, a1, 1
-        sb a4, 0(t6)
-        addi t6, t6, 1
-        bltu a1, a3, 5b
-6:
-        ret
-END(__memcpy)
diff --git a/arch/riscv/lib/string.c b/arch/riscv/lib/string.c
new file mode 100644
index 000000000000..bfc912ee23f8
--- /dev/null
+++ b/arch/riscv/lib/string.c
@@ -0,0 +1,90 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * String functions optimized for hardware which doesn't
+ * handle unaligned memory accesses efficiently.
+ *
+ * Copyright (C) 2021 Matteo Croce
+ */
+
+#include
+#include
+
+/* Minimum size for a word copy to be convenient */
+#define BYTES_LONG      sizeof(long)
+#define WORD_MASK       (BYTES_LONG - 1)
+#define MIN_THRESHOLD   (BYTES_LONG * 2)
+
+/* convenience union to avoid cast between different pointer types */
+union types {
+        u8 *as_u8;
+        unsigned long *as_ulong;
+        uintptr_t as_uptr;
+};
+
+union const_types {
+        const u8 *as_u8;
+        unsigned long *as_ulong;
+        uintptr_t as_uptr;
+};
+
+void *__memcpy(void *dest, const void *src, size_t count)
+{
+        union const_types s = { .as_u8 = src };
+        union types d = { .as_u8 = dest };
+        int distance = 0;
+
+        if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
+                if (count < MIN_THRESHOLD)
+                        goto copy_remainder;
+
+                /* Copy one byte at a time until the destination is aligned. */
+                for (; d.as_uptr & WORD_MASK; count--)
+                        *d.as_u8++ = *s.as_u8++;
+
+                distance = s.as_uptr & WORD_MASK;
+        }
+
+        if (distance) {
+                unsigned long last, next;
+
+                /*
+                 * s is distance bytes ahead of d, and d just reached
+                 * the alignment boundary. Move s backward to word align it
+                 * and shift data to compensate for distance, in order to do
+                 * word-by-word copy.
+                 */
+                s.as_u8 -= distance;
+
+                next = s.as_ulong[0];
+                for (; count >= BYTES_LONG; count -= BYTES_LONG) {
+                        last = next;
+                        next = s.as_ulong[1];
+
+                        d.as_ulong[0] = last >> (distance * 8) |
+                                        next << ((BYTES_LONG - distance) * 8);
+
+                        d.as_ulong++;
+                        s.as_ulong++;
+                }
+
+                /* Restore s with the original offset. */
+                s.as_u8 += distance;
+        } else {
+                /*
+                 * If the source and dest lower bits are the same, do a simple
+                 * 32/64 bit wide copy.
+                 */
+                for (; count >= BYTES_LONG; count -= BYTES_LONG)
+                        *d.as_ulong++ = *s.as_ulong++;
+        }
+
+copy_remainder:
+        while (count--)
+                *d.as_u8++ = *s.as_u8++;
+
+        return dest;
+}
+EXPORT_SYMBOL(__memcpy);
+
+void *memcpy(void *dest, const void *src, size_t count) __weak __alias(__memcpy);
+EXPORT_SYMBOL(memcpy);

From patchwork Sun Sep 19 19:21:03 2021
X-Patchwork-Submitter: Matteo Croce
X-Patchwork-Id: 12504407
From: Matteo Croce
To: linux-riscv@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    Paul Walmsley, Palmer Dabbelt, Albert Ou, Atish Patra,
    Emil Renner Berthing, Akira Tsukamoto, Drew Fustini, Bin Meng,
    David Laight, Guo Ren, Christoph Hellwig
Subject: [PATCH v4 2/3] riscv: optimized memmove
Date: Sun, 19 Sep 2021 21:21:03 +0200
Message-Id: <20210919192104.98592-3-mcroce@linux.microsoft.com>
In-Reply-To: <20210919192104.98592-1-mcroce@linux.microsoft.com>
References: <20210919192104.98592-1-mcroce@linux.microsoft.com>

From: Matteo Croce

When the destination buffer is located before the source one, or when
the buffers don't overlap, it's safe to use memcpy() instead, which is
optimized to use the biggest data size possible.

Signed-off-by: Matteo Croce
Reported-by: kernel test robot
Reported-by: kernel test robot
---
 arch/riscv/include/asm/string.h |  6 ++--
 arch/riscv/kernel/riscv_ksyms.c |  2 --
 arch/riscv/lib/Makefile         |  1 -
 arch/riscv/lib/memmove.S        | 64 ---------------------------------
 arch/riscv/lib/string.c         | 23 ++++++++++++
 5 files changed, 26 insertions(+), 70 deletions(-)
 delete mode 100644 arch/riscv/lib/memmove.S

diff --git a/arch/riscv/include/asm/string.h b/arch/riscv/include/asm/string.h
index 6b5d6fc3eab4..25d9b9078569 100644
--- a/arch/riscv/include/asm/string.h
+++ b/arch/riscv/include/asm/string.h
@@ -17,11 +17,11 @@ extern asmlinkage void *__memset(void *, int, size_t);
 #define __HAVE_ARCH_MEMCPY
 extern void *memcpy(void *dest, const void *src, size_t count);
 extern void *__memcpy(void *dest, const void *src, size_t count);
+#define __HAVE_ARCH_MEMMOVE
+extern void *memmove(void *dest, const void *src, size_t count);
+extern void *__memmove(void *dest, const void *src, size_t count);
 #endif
-#define __HAVE_ARCH_MEMMOVE
-extern asmlinkage void *memmove(void *, const void *, size_t);
-extern asmlinkage void *__memmove(void *, const void *, size_t);
 
 /* For those files which don't want to check by kasan. */
 #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
 #define memcpy(dst, src, len) __memcpy(dst, src, len)
diff --git a/arch/riscv/kernel/riscv_ksyms.c b/arch/riscv/kernel/riscv_ksyms.c
index 3f6d512a5b97..361565c4db7e 100644
--- a/arch/riscv/kernel/riscv_ksyms.c
+++ b/arch/riscv/kernel/riscv_ksyms.c
@@ -10,6 +10,4 @@
  * Assembly functions that may be used (directly or indirectly) by modules
  */
 EXPORT_SYMBOL(memset);
-EXPORT_SYMBOL(memmove);
 EXPORT_SYMBOL(__memset);
-EXPORT_SYMBOL(__memmove);
diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
index 2ffe85d4baee..484f5ff7b508 100644
--- a/arch/riscv/lib/Makefile
+++ b/arch/riscv/lib/Makefile
@@ -1,7 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0-only
 lib-y                += delay.o
 lib-y                += memset.o
-lib-y                += memmove.o
 lib-$(CONFIG_MMU)    += uaccess.o
 lib-$(CONFIG_64BIT)  += tishift.o
 lib-$(CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE) += string.o
diff --git a/arch/riscv/lib/memmove.S b/arch/riscv/lib/memmove.S
deleted file mode 100644
index 07d1d2152ba5..000000000000
--- a/arch/riscv/lib/memmove.S
+++ /dev/null
@@ -1,64 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-
-#include
-#include
-
-ENTRY(__memmove)
-WEAK(memmove)
-        move t0, a0
-        move t1, a1
-
-        beq a0, a1, exit_memcpy
-        beqz a2, exit_memcpy
-        srli t2, a2, 0x2
-
-        slt t3, a0, a1
-        beqz t3, do_reverse
-
-        andi a2, a2, 0x3
-        li t4, 1
-        beqz t2, byte_copy
-
-word_copy:
-        lw t3, 0(a1)
-        addi t2, t2, -1
-        addi a1, a1, 4
-        sw t3, 0(a0)
-        addi a0, a0, 4
-        bnez t2, word_copy
-        beqz a2, exit_memcpy
-        j byte_copy
-
-do_reverse:
-        add a0, a0, a2
-        add a1, a1, a2
-        andi a2, a2, 0x3
-        li t4, -1
-        beqz t2, reverse_byte_copy
-
-reverse_word_copy:
-        addi a1, a1, -4
-        addi t2, t2, -1
-        lw t3, 0(a1)
-        addi a0, a0, -4
-        sw t3, 0(a0)
-        bnez t2, reverse_word_copy
-        beqz a2, exit_memcpy
-
-reverse_byte_copy:
-        addi a0, a0, -1
-        addi a1, a1, -1
-
-byte_copy:
-        lb t3, 0(a1)
-        addi a2, a2, -1
-        sb t3, 0(a0)
-        add a1, a1, t4
-        add a0, a0, t4
-        bnez a2, byte_copy
-
-exit_memcpy:
-        move a0, t0
-        move a1, t1
-        ret
-END(__memmove)
diff --git a/arch/riscv/lib/string.c b/arch/riscv/lib/string.c
index bfc912ee23f8..da033af8fe2f 100644
--- a/arch/riscv/lib/string.c
+++ b/arch/riscv/lib/string.c
@@ -88,3 +88,26 @@ EXPORT_SYMBOL(__memcpy);
 
 void *memcpy(void *dest, const void *src, size_t count) __weak __alias(__memcpy);
 EXPORT_SYMBOL(memcpy);
+
+/*
+ * Simply check whether the buffers overlap, and call memcpy() when a
+ * forward copy is safe; otherwise do a simple byte-wise backward copy.
+ */
+void *__memmove(void *dest, const void *src, size_t count)
+{
+        if (dest < src || src + count <= dest)
+                return memcpy(dest, src, count);
+
+        if (dest > src) {
+                const char *s = src + count;
+                char *tmp = dest + count;
+
+                while (count--)
+                        *--tmp = *--s;
+        }
+        return dest;
+}
+EXPORT_SYMBOL(__memmove);
+
+void *memmove(void *dest, const void *src, size_t count) __weak __alias(__memmove);
+EXPORT_SYMBOL(memmove);
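
[Editorial illustration, not part of the patch: a user-space sketch of the
overlap test used by __memmove() above. forward_copy_is_safe() is a
hypothetical helper written only for this example; it mirrors the condition
under which the implementation falls back to memcpy().]

  #include <stddef.h>
  #include <stdio.h>

  /* Forward copy is safe when dest starts below src, or the regions are disjoint. */
  static int forward_copy_is_safe(const char *dest, const char *src, size_t count)
  {
          return dest < src || src + count <= dest;
  }

  int main(void)
  {
          char buf[32];

          printf("%d\n", forward_copy_is_safe(buf, buf + 4, 16));  /* 1: dest below src */
          printf("%d\n", forward_copy_is_safe(buf + 4, buf, 16));  /* 0: must copy backwards */
          printf("%d\n", forward_copy_is_safe(buf + 20, buf, 8));  /* 1: regions disjoint */
          return 0;
  }
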
From patchwork Sun Sep 19 19:21:04 2021
X-Patchwork-Submitter: Matteo Croce
X-Patchwork-Id: 12504405
From: Matteo Croce
To: linux-riscv@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    Paul Walmsley, Palmer Dabbelt, Albert Ou, Atish Patra,
    Emil Renner Berthing, Akira Tsukamoto, Drew Fustini, Bin Meng,
    David Laight, Guo Ren, Christoph Hellwig
Subject: [PATCH v4 3/3] riscv: optimized memset
Date: Sun, 19 Sep 2021 21:21:04 +0200
Message-Id: <20210919192104.98592-4-mcroce@linux.microsoft.com>
In-Reply-To: <20210919192104.98592-1-mcroce@linux.microsoft.com>
References: <20210919192104.98592-1-mcroce@linux.microsoft.com>

From: Matteo Croce

The generic memset is defined as a byte-at-a-time write. This is always
safe, but it's slower than a 4 byte or even 8 byte write.

Write a generic memset which fills the data one byte at a time until the
destination is aligned, then fills using the largest size allowed, and
finally fills the remaining data one byte at a time.
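[Editorial illustration, not part of the patch: how a fill byte is broadcast
to a whole long, both with the multiply used under
CONFIG_ARCH_HAS_FAST_MULTIPLIER and with the shift-and-or fallback. Assumes
a 64-bit unsigned long, as on riscv64.]

  #include <stdio.h>

  int main(void)
  {
          unsigned long c = 0xab;
          unsigned long mul = c * 0x0101010101010101UL;   /* fast-multiplier path */
          unsigned long cu = c;                           /* shift-and-or fallback */

          cu |= cu << 8;
          cu |= cu << 16;
          cu |= (cu << 16) << 16;

          printf("%lx\n%lx\n", mul, cu);  /* both print abababababababab */
          return 0;
  }
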
Signed-off-by: Matteo Croce
---
 arch/riscv/include/asm/string.h |  10 +--
 arch/riscv/kernel/Makefile      |   1 -
 arch/riscv/kernel/riscv_ksyms.c |  13 ----
 arch/riscv/lib/Makefile         |   1 -
 arch/riscv/lib/memset.S         | 113 --------------------------------
 arch/riscv/lib/string.c         |  41 ++++++++++++
 6 files changed, 44 insertions(+), 135 deletions(-)
 delete mode 100644 arch/riscv/kernel/riscv_ksyms.c
 delete mode 100644 arch/riscv/lib/memset.S

diff --git a/arch/riscv/include/asm/string.h b/arch/riscv/include/asm/string.h
index 25d9b9078569..90500635035a 100644
--- a/arch/riscv/include/asm/string.h
+++ b/arch/riscv/include/asm/string.h
@@ -6,14 +6,10 @@
 #ifndef _ASM_RISCV_STRING_H
 #define _ASM_RISCV_STRING_H
 
-#include
-#include
-
-#define __HAVE_ARCH_MEMSET
-extern asmlinkage void *memset(void *, int, size_t);
-extern asmlinkage void *__memset(void *, int, size_t);
-
 #ifdef CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE
+#define __HAVE_ARCH_MEMSET
+extern void *memset(void *s, int c, size_t count);
+extern void *__memset(void *s, int c, size_t count);
 #define __HAVE_ARCH_MEMCPY
 extern void *memcpy(void *dest, const void *src, size_t count);
 extern void *__memcpy(void *dest, const void *src, size_t count);
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index 3397ddac1a30..fecf03822435 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -31,7 +31,6 @@ obj-y += syscall_table.o
 obj-y    += sys_riscv.o
 obj-y    += time.o
 obj-y    += traps.o
-obj-y    += riscv_ksyms.o
 obj-y    += stacktrace.o
 obj-y    += cacheinfo.o
 obj-y    += patch.o
diff --git a/arch/riscv/kernel/riscv_ksyms.c b/arch/riscv/kernel/riscv_ksyms.c
deleted file mode 100644
index 361565c4db7e..000000000000
--- a/arch/riscv/kernel/riscv_ksyms.c
+++ /dev/null
@@ -1,13 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (C) 2017 Zihao Yu
- */
-
-#include
-#include
-
-/*
- * Assembly functions that may be used (directly or indirectly) by modules
- */
-EXPORT_SYMBOL(memset);
-EXPORT_SYMBOL(__memset);
diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
index 484f5ff7b508..e33263cc622a 100644
--- a/arch/riscv/lib/Makefile
+++ b/arch/riscv/lib/Makefile
@@ -1,6 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0-only
 lib-y                += delay.o
-lib-y                += memset.o
 lib-$(CONFIG_MMU)    += uaccess.o
 lib-$(CONFIG_64BIT)  += tishift.o
 lib-$(CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE) += string.o
diff --git a/arch/riscv/lib/memset.S b/arch/riscv/lib/memset.S
deleted file mode 100644
index 34c5360c6705..000000000000
--- a/arch/riscv/lib/memset.S
+++ /dev/null
@@ -1,113 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (C) 2013 Regents of the University of California
- */
-
-
-#include
-#include
-
-/* void *memset(void *, int, size_t) */
-ENTRY(__memset)
-WEAK(memset)
-        move t0, a0  /* Preserve return value */
-
-        /* Defer to byte-oriented fill for small sizes */
-        sltiu a3, a2, 16
-        bnez a3, 4f
-
-        /*
-         * Round to nearest XLEN-aligned address
-         * greater than or equal to start address
-         */
-        addi a3, t0, SZREG-1
-        andi a3, a3, ~(SZREG-1)
-        beq a3, t0, 2f  /* Skip if already aligned */
-        /* Handle initial misalignment */
-        sub a4, a3, t0
-1:
-        sb a1, 0(t0)
-        addi t0, t0, 1
-        bltu t0, a3, 1b
-        sub a2, a2, a4  /* Update count */
-
-2: /* Duff's device with 32 XLEN stores per iteration */
-        /* Broadcast value into all bytes */
-        andi a1, a1, 0xff
-        slli a3, a1, 8
-        or a1, a3, a1
-        slli a3, a1, 16
-        or a1, a3, a1
-#ifdef CONFIG_64BIT
-        slli a3, a1, 32
-        or a1, a3, a1
-#endif
-
-        /* Calculate end address */
-        andi a4, a2, ~(SZREG-1)
-        add a3, t0, a4
-
-        andi a4, a4, 31*SZREG  /* Calculate remainder */
-        beqz a4, 3f            /* Shortcut if no remainder */
-        neg a4, a4
-        addi a4, a4, 32*SZREG  /* Calculate initial offset */
-
-        /* Adjust start address with offset */
-        sub t0, t0, a4
-
-        /* Jump into loop body */
-        /* Assumes 32-bit instruction lengths */
-        la a5, 3f
-#ifdef CONFIG_64BIT
-        srli a4, a4, 1
-#endif
-        add a5, a5, a4
-        jr a5
-3:
-        REG_S a1,        0(t0)
-        REG_S a1,    SZREG(t0)
-        REG_S a1,  2*SZREG(t0)
-        REG_S a1,  3*SZREG(t0)
-        REG_S a1,  4*SZREG(t0)
-        REG_S a1,  5*SZREG(t0)
-        REG_S a1,  6*SZREG(t0)
-        REG_S a1,  7*SZREG(t0)
-        REG_S a1,  8*SZREG(t0)
-        REG_S a1,  9*SZREG(t0)
-        REG_S a1, 10*SZREG(t0)
-        REG_S a1, 11*SZREG(t0)
-        REG_S a1, 12*SZREG(t0)
-        REG_S a1, 13*SZREG(t0)
-        REG_S a1, 14*SZREG(t0)
-        REG_S a1, 15*SZREG(t0)
-        REG_S a1, 16*SZREG(t0)
-        REG_S a1, 17*SZREG(t0)
-        REG_S a1, 18*SZREG(t0)
-        REG_S a1, 19*SZREG(t0)
-        REG_S a1, 20*SZREG(t0)
-        REG_S a1, 21*SZREG(t0)
-        REG_S a1, 22*SZREG(t0)
-        REG_S a1, 23*SZREG(t0)
-        REG_S a1, 24*SZREG(t0)
-        REG_S a1, 25*SZREG(t0)
-        REG_S a1, 26*SZREG(t0)
-        REG_S a1, 27*SZREG(t0)
-        REG_S a1, 28*SZREG(t0)
-        REG_S a1, 29*SZREG(t0)
-        REG_S a1, 30*SZREG(t0)
-        REG_S a1, 31*SZREG(t0)
-        addi t0, t0, 32*SZREG
-        bltu t0, a3, 3b
-        andi a2, a2, SZREG-1  /* Update count */
-
-4:
-        /* Handle trailing misalignment */
-        beqz a2, 6f
-        add a3, t0, a2
-5:
-        sb a1, 0(t0)
-        addi t0, t0, 1
-        bltu t0, a3, 5b
-6:
-        ret
-END(__memset)
diff --git a/arch/riscv/lib/string.c b/arch/riscv/lib/string.c
index da033af8fe2f..12daa71698fb 100644
--- a/arch/riscv/lib/string.c
+++ b/arch/riscv/lib/string.c
@@ -111,3 +111,44 @@ EXPORT_SYMBOL(__memmove);
 
 void *memmove(void *dest, const void *src, size_t count) __weak __alias(__memmove);
 EXPORT_SYMBOL(memmove);
+
+void *__memset(void *s, int c, size_t count)
+{
+        union types dest = { .as_u8 = s };
+
+        if (count >= MIN_THRESHOLD) {
+                unsigned long cu = (unsigned long)c;
+
+                /* Compose an ulong with 'c' repeated 4/8 times */
+#ifdef CONFIG_ARCH_HAS_FAST_MULTIPLIER
+                cu *= 0x0101010101010101UL;
+#else
+                cu |= cu << 8;
+                cu |= cu << 16;
+                /* Suppress warning on 32 bit machines */
+                cu |= (cu << 16) << 16;
+#endif
+                if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
+                        /*
+                         * Fill the buffer one byte at a time until
+                         * the destination is word aligned.
+                         */
+                        for (; count && dest.as_uptr & WORD_MASK; count--)
+                                *dest.as_u8++ = c;
+                }
+
+                /* Copy using the largest size allowed */
+                for (; count >= BYTES_LONG; count -= BYTES_LONG)
+                        *dest.as_ulong++ = cu;
+        }
+
+        /* copy the remainder */
+        while (count--)
+                *dest.as_u8++ = c;
+
+        return s;
+}
+EXPORT_SYMBOL(__memset);
+
+void *memset(void *s, int c, size_t count) __weak __alias(__memset);
+EXPORT_SYMBOL(memset);
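
[Editorial note, not part of the patch: when reworking routines like these,
the corner cases worth covering are lengths around MIN_THRESHOLD and every
destination misalignment. A rough user-space harness along these lines could
be adapted to exercise a memset-style routine; check() and the constants here
are made up for the example.]

  #include <assert.h>
  #include <stddef.h>
  #include <string.h>

  /* Compare a memset-like routine against a naive byte loop for many
   * alignment/length combinations around the word-fill threshold. */
  static void check(void *(*fill)(void *, int, size_t))
  {
          unsigned char got[64], want[64];

          for (size_t align = 0; align < 8; align++) {
                  for (size_t len = 0; len <= 40; len++) {
                          memset(got, 0xee, sizeof(got));
                          memset(want, 0xee, sizeof(want));
                          fill(got + align, 0xa5, len);
                          for (size_t i = 0; i < len; i++)
                                  want[align + i] = 0xa5;
                          assert(memcmp(got, want, sizeof(got)) == 0);
                  }
          }
  }

  int main(void)
  {
          check(memset);  /* substitute the implementation under test */
          return 0;
  }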